1 //===- Attributor.h --- Module-wide attribute deduction ---------*- C++ -*-===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // Attributor: An inter procedural (abstract) "attribute" deduction framework.
10 //
11 // The Attributor framework is an inter procedural abstract analysis (fixpoint
12 // iteration analysis). The goal is to allow easy deduction of new attributes as
13 // well as information exchange between abstract attributes in-flight.
14 //
15 // The Attributor class is the driver and the link between the various abstract
16 // attributes. The Attributor will iterate until a fixpoint state is reached by
17 // all abstract attributes in-flight, or until it will enforce a pessimistic fix
18 // point because an iteration limit is reached.
19 //
20 // Abstract attributes, derived from the AbstractAttribute class, actually
21 // describe properties of the code. They can correspond to actual LLVM-IR
22 // attributes, or they can be more general, ultimately unrelated to LLVM-IR
23 // attributes. The latter is useful when an abstract attributes provides
24 // information to other abstract attributes in-flight but we might not want to
25 // manifest the information. The Attributor allows to query in-flight abstract
26 // attributes through the `Attributor::getAAFor` method (see the method
27 // description for an example). If the method is used by an abstract attribute
28 // P, and it results in an abstract attribute Q, the Attributor will
29 // automatically capture a potential dependence from Q to P. This dependence
30 // will cause P to be reevaluated whenever Q changes in the future.
31 //
32 // The Attributor will only reevaluate abstract attributes that might have
33 // changed since the last iteration. That means that the Attribute will not
34 // revisit all instructions/blocks/functions in the module but only query
35 // an update from a subset of the abstract attributes.
36 //
37 // The update method `AbstractAttribute::updateImpl` is implemented by the
38 // specific "abstract attribute" subclasses. The method is invoked whenever the
39 // currently assumed state (see the AbstractState class) might not be valid
40 // anymore. This can, for example, happen if the state was dependent on another
41 // abstract attribute that changed. In every invocation, the update method has
42 // to adjust the internal state of an abstract attribute to a point that is
43 // justifiable by the underlying IR and the current state of abstract attributes
44 // in-flight. Since the IR is given and assumed to be valid, the information
45 // derived from it can be assumed to hold. However, information derived from
46 // other abstract attributes is conditional on various things. If the justifying
47 // state changed, the `updateImpl` has to revisit the situation and potentially
48 // find another justification or limit the optimistic assumes made.
49 //
50 // Change is the key in this framework. Until a state of no-change, thus a
51 // fixpoint, is reached, the Attributor will query the abstract attributes
52 // in-flight to re-evaluate their state. If the (current) state is too
53 // optimistic, hence it cannot be justified anymore through other abstract
54 // attributes or the state of the IR, the state of the abstract attribute will
55 // have to change. Generally, we assume abstract attribute state to be a finite
56 // height lattice and the update function to be monotone. However, these
57 // conditions are not enforced because the iteration limit will guarantee
58 // termination. If an optimistic fixpoint is reached, or a pessimistic fix
59 // point is enforced after a timeout, the abstract attributes are tasked to
60 // manifest their result in the IR for passes to come.
61 //
62 // Attribute manifestation is not mandatory. If desired, there is support to
63 // generate a single or multiple LLVM-IR attributes already in the helper struct
64 // IRAttribute. In the simplest case, a subclass inherits from IRAttribute with
65 // a proper Attribute::AttrKind as template parameter. The Attributor
66 // manifestation framework will then create and place a new attribute if it is
67 // allowed to do so (based on the abstract state). Other use cases can be
68 // achieved by overloading AbstractAttribute or IRAttribute methods.
69 //
70 //
71 // The "mechanics" of adding a new "abstract attribute":
72 // - Define a class (transitively) inheriting from AbstractAttribute and one
73 //   (which could be the same) that (transitively) inherits from AbstractState.
74 //   For the latter, consider the already available BooleanState and
75 //   {Inc,Dec,Bit}IntegerState if they fit your needs, e.g., you require only a
76 //   number tracking or bit-encoding.
77 // - Implement all pure methods. Also use overloading if the attribute is not
78 //   conforming with the "default" behavior: A (set of) LLVM-IR attribute(s) for
79 //   an argument, call site argument, function return value, or function. See
80 //   the class and method descriptions for more information on the two
81 //   "Abstract" classes and their respective methods.
82 // - Register opportunities for the new abstract attribute in the
83 //   `Attributor::identifyDefaultAbstractAttributes` method if it should be
84 //   counted as a 'default' attribute.
85 // - Add sufficient tests.
86 // - Add a Statistics object for bookkeeping. If it is a simple (set of)
87 //   attribute(s) manifested through the Attributor manifestation framework, see
88 //   the bookkeeping function in Attributor.cpp.
89 // - If instructions with a certain opcode are interesting to the attribute, add
90 //   that opcode to the switch in `Attributor::identifyAbstractAttributes`. This
91 //   will make it possible to query all those instructions through the
92 //   `InformationCache::getOpcodeInstMapForFunction` interface and eliminate the
93 //   need to traverse the IR repeatedly.
94 //
95 //===----------------------------------------------------------------------===//
96 
97 #ifndef LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
98 #define LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
99 
100 #include "llvm/ADT/DenseSet.h"
101 #include "llvm/ADT/GraphTraits.h"
102 #include "llvm/ADT/MapVector.h"
103 #include "llvm/ADT/STLExtras.h"
104 #include "llvm/ADT/SetVector.h"
105 #include "llvm/ADT/Triple.h"
106 #include "llvm/ADT/iterator.h"
107 #include "llvm/Analysis/AssumeBundleQueries.h"
108 #include "llvm/Analysis/CFG.h"
109 #include "llvm/Analysis/CGSCCPassManager.h"
110 #include "llvm/Analysis/LazyCallGraph.h"
111 #include "llvm/Analysis/LoopInfo.h"
112 #include "llvm/Analysis/MustExecute.h"
113 #include "llvm/Analysis/OptimizationRemarkEmitter.h"
114 #include "llvm/Analysis/PostDominators.h"
115 #include "llvm/Analysis/TargetLibraryInfo.h"
116 #include "llvm/IR/AbstractCallSite.h"
117 #include "llvm/IR/ConstantRange.h"
118 #include "llvm/IR/PassManager.h"
119 #include "llvm/Support/Allocator.h"
120 #include "llvm/Support/Casting.h"
121 #include "llvm/Support/GraphWriter.h"
122 #include "llvm/Support/TimeProfiler.h"
123 #include "llvm/Transforms/Utils/CallGraphUpdater.h"
124 
125 namespace llvm {
126 
127 struct AADepGraphNode;
128 struct AADepGraph;
129 struct Attributor;
130 struct AbstractAttribute;
131 struct InformationCache;
132 struct AAIsDead;
133 struct AttributorCallGraph;
134 
135 class AAManager;
136 class AAResults;
137 class Function;
138 
139 /// Abstract Attribute helper functions.
140 namespace AA {
141 
142 /// Return true if \p V is dynamically unique, that is, there are no two
143 /// "instances" of \p V at runtime with different values.
144 bool isDynamicallyUnique(Attributor &A, const AbstractAttribute &QueryingAA,
145                          const Value &V);
146 
147 /// Return true if \p V is a valid value in \p Scope, that is a constant or an
148 /// instruction/argument of \p Scope.
149 bool isValidInScope(const Value &V, const Function *Scope);
150 
151 /// Return true if \p V is a valid value at position \p CtxI, that is a
152 /// constant, an argument of the same function as \p CtxI, or an instruction in
153 /// that function that dominates \p CtxI.
154 bool isValidAtPosition(const Value &V, const Instruction &CtxI,
155                        InformationCache &InfoCache);
156 
157 /// Try to convert \p V to type \p Ty without introducing new instructions. If
158 /// this is not possible return `nullptr`. Note: this function basically knows
159 /// how to cast various constants.
160 Value *getWithType(Value &V, Type &Ty);
161 
162 /// Return the combination of \p A and \p B such that the result is a possible
163 /// value of both. \p B is potentially casted to match the type \p Ty or the
164 /// type of \p A if \p Ty is null.
165 ///
166 /// Examples:
167 ///        X + none  => X
168 /// not_none + undef => not_none
169 ///          V1 + V2 => nullptr
170 Optional<Value *>
171 combineOptionalValuesInAAValueLatice(const Optional<Value *> &A,
172                                      const Optional<Value *> &B, Type *Ty);
173 
174 /// Return the initial value of \p Obj with type \p Ty if that is a constant.
175 Constant *getInitialValueForObj(Value &Obj, Type &Ty);
176 
177 /// Collect all potential underlying objects of \p Ptr at position \p CtxI in
178 /// \p Objects. Assumed information is used and dependences onto \p QueryingAA
179 /// are added appropriately.
180 ///
181 /// \returns True if \p Objects contains all assumed underlying objects, and
182 ///          false if something went wrong and the objects could not be
183 ///          determined.
184 bool getAssumedUnderlyingObjects(Attributor &A, const Value &Ptr,
185                                  SmallVectorImpl<Value *> &Objects,
186                                  const AbstractAttribute &QueryingAA,
187                                  const Instruction *CtxI);
188 
189 /// Collect all potential values of the one stored by \p SI into
190 /// \p PotentialCopies. That is, the only copies that were made via the
191 /// store are assumed to be known and all in \p PotentialCopies. Dependences
192 /// onto \p QueryingAA are properly tracked, \p UsedAssumedInformation will
193 /// inform the caller if assumed information was used.
194 ///
195 /// \returns True if the assumed potential copies are all in \p PotentialCopies,
196 ///          false if something went wrong and the copies could not be
197 ///          determined.
198 bool getPotentialCopiesOfStoredValue(
199     Attributor &A, StoreInst &SI, SmallSetVector<Value *, 4> &PotentialCopies,
200     const AbstractAttribute &QueryingAA, bool &UsedAssumedInformation);
201 
202 } // namespace AA
203 
204 /// The value passed to the line option that defines the maximal initialization
205 /// chain length.
206 extern unsigned MaxInitializationChainLength;
207 
208 ///{
209 enum class ChangeStatus {
210   CHANGED,
211   UNCHANGED,
212 };
213 
214 ChangeStatus operator|(ChangeStatus l, ChangeStatus r);
215 ChangeStatus &operator|=(ChangeStatus &l, ChangeStatus r);
216 ChangeStatus operator&(ChangeStatus l, ChangeStatus r);
217 ChangeStatus &operator&=(ChangeStatus &l, ChangeStatus r);
218 
219 enum class DepClassTy {
220   REQUIRED, ///< The target cannot be valid if the source is not.
221   OPTIONAL, ///< The target may be valid if the source is not.
222   NONE,     ///< Do not track a dependence between source and target.
223 };
224 ///}
225 
226 /// The data structure for the nodes of a dependency graph
227 struct AADepGraphNode {
228 public:
229   virtual ~AADepGraphNode(){};
230   using DepTy = PointerIntPair<AADepGraphNode *, 1>;
231 
232 protected:
233   /// Set of dependency graph nodes which should be updated if this one
234   /// is updated. The bit encodes if it is optional.
235   TinyPtrVector<DepTy> Deps;
236 
237   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
238   static AbstractAttribute *DepGetValAA(DepTy &DT) {
239     return cast<AbstractAttribute>(DT.getPointer());
240   }
241 
242   operator AbstractAttribute *() { return cast<AbstractAttribute>(this); }
243 
244 public:
245   using iterator =
246       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
247   using aaiterator =
248       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetValAA)>;
249 
250   aaiterator begin() { return aaiterator(Deps.begin(), &DepGetValAA); }
251   aaiterator end() { return aaiterator(Deps.end(), &DepGetValAA); }
252   iterator child_begin() { return iterator(Deps.begin(), &DepGetVal); }
253   iterator child_end() { return iterator(Deps.end(), &DepGetVal); }
254 
255   virtual void print(raw_ostream &OS) const { OS << "AADepNode Impl\n"; }
256   TinyPtrVector<DepTy> &getDeps() { return Deps; }
257 
258   friend struct Attributor;
259   friend struct AADepGraph;
260 };
261 
262 /// The data structure for the dependency graph
263 ///
264 /// Note that in this graph if there is an edge from A to B (A -> B),
265 /// then it means that B depends on A, and when the state of A is
266 /// updated, node B should also be updated
267 struct AADepGraph {
268   AADepGraph() {}
269   ~AADepGraph() {}
270 
271   using DepTy = AADepGraphNode::DepTy;
272   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
273   using iterator =
274       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
275 
276   /// There is no root node for the dependency graph. But the SCCIterator
277   /// requires a single entry point, so we maintain a fake("synthetic") root
278   /// node that depends on every node.
279   AADepGraphNode SyntheticRoot;
280   AADepGraphNode *GetEntryNode() { return &SyntheticRoot; }
281 
282   iterator begin() { return SyntheticRoot.child_begin(); }
283   iterator end() { return SyntheticRoot.child_end(); }
284 
285   void viewGraph();
286 
287   /// Dump graph to file
288   void dumpGraph();
289 
290   /// Print dependency graph
291   void print();
292 };
293 
294 /// Helper to describe and deal with positions in the LLVM-IR.
295 ///
296 /// A position in the IR is described by an anchor value and an "offset" that
297 /// could be the argument number, for call sites and arguments, or an indicator
298 /// of the "position kind". The kinds, specified in the Kind enum below, include
299 /// the locations in the attribute list, i.a., function scope and return value,
300 /// as well as a distinction between call sites and functions. Finally, there
301 /// are floating values that do not have a corresponding attribute list
302 /// position.
303 struct IRPosition {
304   // NOTE: In the future this definition can be changed to support recursive
305   // functions.
306   using CallBaseContext = CallBase;
307 
308   /// The positions we distinguish in the IR.
309   enum Kind : char {
310     IRP_INVALID,  ///< An invalid position.
311     IRP_FLOAT,    ///< A position that is not associated with a spot suitable
312                   ///< for attributes. This could be any value or instruction.
313     IRP_RETURNED, ///< An attribute for the function return value.
314     IRP_CALL_SITE_RETURNED, ///< An attribute for a call site return value.
315     IRP_FUNCTION,           ///< An attribute for a function (scope).
316     IRP_CALL_SITE,          ///< An attribute for a call site (function scope).
317     IRP_ARGUMENT,           ///< An attribute for a function argument.
318     IRP_CALL_SITE_ARGUMENT, ///< An attribute for a call site argument.
319   };
320 
321   /// Default constructor available to create invalid positions implicitly. All
322   /// other positions need to be created explicitly through the appropriate
323   /// static member function.
324   IRPosition() : Enc(nullptr, ENC_VALUE) { verify(); }
325 
326   /// Create a position describing the value of \p V.
327   static const IRPosition value(const Value &V,
328                                 const CallBaseContext *CBContext = nullptr) {
329     if (auto *Arg = dyn_cast<Argument>(&V))
330       return IRPosition::argument(*Arg, CBContext);
331     if (auto *CB = dyn_cast<CallBase>(&V))
332       return IRPosition::callsite_returned(*CB);
333     return IRPosition(const_cast<Value &>(V), IRP_FLOAT, CBContext);
334   }
335 
336   /// Create a position describing the function scope of \p F.
337   /// \p CBContext is used for call base specific analysis.
338   static const IRPosition function(const Function &F,
339                                    const CallBaseContext *CBContext = nullptr) {
340     return IRPosition(const_cast<Function &>(F), IRP_FUNCTION, CBContext);
341   }
342 
343   /// Create a position describing the returned value of \p F.
344   /// \p CBContext is used for call base specific analysis.
345   static const IRPosition returned(const Function &F,
346                                    const CallBaseContext *CBContext = nullptr) {
347     return IRPosition(const_cast<Function &>(F), IRP_RETURNED, CBContext);
348   }
349 
350   /// Create a position describing the argument \p Arg.
351   /// \p CBContext is used for call base specific analysis.
352   static const IRPosition argument(const Argument &Arg,
353                                    const CallBaseContext *CBContext = nullptr) {
354     return IRPosition(const_cast<Argument &>(Arg), IRP_ARGUMENT, CBContext);
355   }
356 
357   /// Create a position describing the function scope of \p CB.
358   static const IRPosition callsite_function(const CallBase &CB) {
359     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE);
360   }
361 
362   /// Create a position describing the returned value of \p CB.
363   static const IRPosition callsite_returned(const CallBase &CB) {
364     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE_RETURNED);
365   }
366 
367   /// Create a position describing the argument of \p CB at position \p ArgNo.
368   static const IRPosition callsite_argument(const CallBase &CB,
369                                             unsigned ArgNo) {
370     return IRPosition(const_cast<Use &>(CB.getArgOperandUse(ArgNo)),
371                       IRP_CALL_SITE_ARGUMENT);
372   }
373 
374   /// Create a position describing the argument of \p ACS at position \p ArgNo.
375   static const IRPosition callsite_argument(AbstractCallSite ACS,
376                                             unsigned ArgNo) {
377     if (ACS.getNumArgOperands() <= ArgNo)
378       return IRPosition();
379     int CSArgNo = ACS.getCallArgOperandNo(ArgNo);
380     if (CSArgNo >= 0)
381       return IRPosition::callsite_argument(
382           cast<CallBase>(*ACS.getInstruction()), CSArgNo);
383     return IRPosition();
384   }
385 
386   /// Create a position with function scope matching the "context" of \p IRP.
387   /// If \p IRP is a call site (see isAnyCallSitePosition()) then the result
388   /// will be a call site position, otherwise the function position of the
389   /// associated function.
390   static const IRPosition
391   function_scope(const IRPosition &IRP,
392                  const CallBaseContext *CBContext = nullptr) {
393     if (IRP.isAnyCallSitePosition()) {
394       return IRPosition::callsite_function(
395           cast<CallBase>(IRP.getAnchorValue()));
396     }
397     assert(IRP.getAssociatedFunction());
398     return IRPosition::function(*IRP.getAssociatedFunction(), CBContext);
399   }
400 
401   bool operator==(const IRPosition &RHS) const {
402     return Enc == RHS.Enc && RHS.CBContext == CBContext;
403   }
404   bool operator!=(const IRPosition &RHS) const { return !(*this == RHS); }
405 
406   /// Return the value this abstract attribute is anchored with.
407   ///
408   /// The anchor value might not be the associated value if the latter is not
409   /// sufficient to determine where arguments will be manifested. This is, so
410   /// far, only the case for call site arguments as the value is not sufficient
411   /// to pinpoint them. Instead, we can use the call site as an anchor.
412   Value &getAnchorValue() const {
413     switch (getEncodingBits()) {
414     case ENC_VALUE:
415     case ENC_RETURNED_VALUE:
416     case ENC_FLOATING_FUNCTION:
417       return *getAsValuePtr();
418     case ENC_CALL_SITE_ARGUMENT_USE:
419       return *(getAsUsePtr()->getUser());
420     default:
421       llvm_unreachable("Unkown encoding!");
422     };
423   }
424 
425   /// Return the associated function, if any.
426   Function *getAssociatedFunction() const {
427     if (auto *CB = dyn_cast<CallBase>(&getAnchorValue())) {
428       // We reuse the logic that associates callback calles to arguments of a
429       // call site here to identify the callback callee as the associated
430       // function.
431       if (Argument *Arg = getAssociatedArgument())
432         return Arg->getParent();
433       return CB->getCalledFunction();
434     }
435     return getAnchorScope();
436   }
437 
438   /// Return the associated argument, if any.
439   Argument *getAssociatedArgument() const;
440 
441   /// Return true if the position refers to a function interface, that is the
442   /// function scope, the function return, or an argument.
443   bool isFnInterfaceKind() const {
444     switch (getPositionKind()) {
445     case IRPosition::IRP_FUNCTION:
446     case IRPosition::IRP_RETURNED:
447     case IRPosition::IRP_ARGUMENT:
448       return true;
449     default:
450       return false;
451     }
452   }
453 
454   /// Return the Function surrounding the anchor value.
455   Function *getAnchorScope() const {
456     Value &V = getAnchorValue();
457     if (isa<Function>(V))
458       return &cast<Function>(V);
459     if (isa<Argument>(V))
460       return cast<Argument>(V).getParent();
461     if (isa<Instruction>(V))
462       return cast<Instruction>(V).getFunction();
463     return nullptr;
464   }
465 
466   /// Return the context instruction, if any.
467   Instruction *getCtxI() const {
468     Value &V = getAnchorValue();
469     if (auto *I = dyn_cast<Instruction>(&V))
470       return I;
471     if (auto *Arg = dyn_cast<Argument>(&V))
472       if (!Arg->getParent()->isDeclaration())
473         return &Arg->getParent()->getEntryBlock().front();
474     if (auto *F = dyn_cast<Function>(&V))
475       if (!F->isDeclaration())
476         return &(F->getEntryBlock().front());
477     return nullptr;
478   }
479 
480   /// Return the value this abstract attribute is associated with.
481   Value &getAssociatedValue() const {
482     if (getCallSiteArgNo() < 0 || isa<Argument>(&getAnchorValue()))
483       return getAnchorValue();
484     assert(isa<CallBase>(&getAnchorValue()) && "Expected a call base!");
485     return *cast<CallBase>(&getAnchorValue())
486                 ->getArgOperand(getCallSiteArgNo());
487   }
488 
489   /// Return the type this abstract attribute is associated with.
490   Type *getAssociatedType() const {
491     if (getPositionKind() == IRPosition::IRP_RETURNED)
492       return getAssociatedFunction()->getReturnType();
493     return getAssociatedValue().getType();
494   }
495 
496   /// Return the callee argument number of the associated value if it is an
497   /// argument or call site argument, otherwise a negative value. In contrast to
498   /// `getCallSiteArgNo` this method will always return the "argument number"
499   /// from the perspective of the callee. This may not the same as the call site
500   /// if this is a callback call.
501   int getCalleeArgNo() const {
502     return getArgNo(/* CallbackCalleeArgIfApplicable */ true);
503   }
504 
505   /// Return the call site argument number of the associated value if it is an
506   /// argument or call site argument, otherwise a negative value. In contrast to
507   /// `getCalleArgNo` this method will always return the "operand number" from
508   /// the perspective of the call site. This may not the same as the callee
509   /// perspective if this is a callback call.
510   int getCallSiteArgNo() const {
511     return getArgNo(/* CallbackCalleeArgIfApplicable */ false);
512   }
513 
514   /// Return the index in the attribute list for this position.
515   unsigned getAttrIdx() const {
516     switch (getPositionKind()) {
517     case IRPosition::IRP_INVALID:
518     case IRPosition::IRP_FLOAT:
519       break;
520     case IRPosition::IRP_FUNCTION:
521     case IRPosition::IRP_CALL_SITE:
522       return AttributeList::FunctionIndex;
523     case IRPosition::IRP_RETURNED:
524     case IRPosition::IRP_CALL_SITE_RETURNED:
525       return AttributeList::ReturnIndex;
526     case IRPosition::IRP_ARGUMENT:
527     case IRPosition::IRP_CALL_SITE_ARGUMENT:
528       return getCallSiteArgNo() + AttributeList::FirstArgIndex;
529     }
530     llvm_unreachable(
531         "There is no attribute index for a floating or invalid position!");
532   }
533 
534   /// Return the associated position kind.
535   Kind getPositionKind() const {
536     char EncodingBits = getEncodingBits();
537     if (EncodingBits == ENC_CALL_SITE_ARGUMENT_USE)
538       return IRP_CALL_SITE_ARGUMENT;
539     if (EncodingBits == ENC_FLOATING_FUNCTION)
540       return IRP_FLOAT;
541 
542     Value *V = getAsValuePtr();
543     if (!V)
544       return IRP_INVALID;
545     if (isa<Argument>(V))
546       return IRP_ARGUMENT;
547     if (isa<Function>(V))
548       return isReturnPosition(EncodingBits) ? IRP_RETURNED : IRP_FUNCTION;
549     if (isa<CallBase>(V))
550       return isReturnPosition(EncodingBits) ? IRP_CALL_SITE_RETURNED
551                                             : IRP_CALL_SITE;
552     return IRP_FLOAT;
553   }
554 
555   /// TODO: Figure out if the attribute related helper functions should live
556   ///       here or somewhere else.
557 
558   /// Return true if any kind in \p AKs existing in the IR at a position that
559   /// will affect this one. See also getAttrs(...).
560   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
561   ///                                 e.g., the function position if this is an
562   ///                                 argument position, should be ignored.
563   bool hasAttr(ArrayRef<Attribute::AttrKind> AKs,
564                bool IgnoreSubsumingPositions = false,
565                Attributor *A = nullptr) const;
566 
567   /// Return the attributes of any kind in \p AKs existing in the IR at a
568   /// position that will affect this one. While each position can only have a
569   /// single attribute of any kind in \p AKs, there are "subsuming" positions
570   /// that could have an attribute as well. This method returns all attributes
571   /// found in \p Attrs.
572   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
573   ///                                 e.g., the function position if this is an
574   ///                                 argument position, should be ignored.
575   void getAttrs(ArrayRef<Attribute::AttrKind> AKs,
576                 SmallVectorImpl<Attribute> &Attrs,
577                 bool IgnoreSubsumingPositions = false,
578                 Attributor *A = nullptr) const;
579 
580   /// Remove the attribute of kind \p AKs existing in the IR at this position.
581   void removeAttrs(ArrayRef<Attribute::AttrKind> AKs) const {
582     if (getPositionKind() == IRP_INVALID || getPositionKind() == IRP_FLOAT)
583       return;
584 
585     AttributeList AttrList;
586     auto *CB = dyn_cast<CallBase>(&getAnchorValue());
587     if (CB)
588       AttrList = CB->getAttributes();
589     else
590       AttrList = getAssociatedFunction()->getAttributes();
591 
592     LLVMContext &Ctx = getAnchorValue().getContext();
593     for (Attribute::AttrKind AK : AKs)
594       AttrList = AttrList.removeAttribute(Ctx, getAttrIdx(), AK);
595 
596     if (CB)
597       CB->setAttributes(AttrList);
598     else
599       getAssociatedFunction()->setAttributes(AttrList);
600   }
601 
602   bool isAnyCallSitePosition() const {
603     switch (getPositionKind()) {
604     case IRPosition::IRP_CALL_SITE:
605     case IRPosition::IRP_CALL_SITE_RETURNED:
606     case IRPosition::IRP_CALL_SITE_ARGUMENT:
607       return true;
608     default:
609       return false;
610     }
611   }
612 
613   /// Return true if the position is an argument or call site argument.
614   bool isArgumentPosition() const {
615     switch (getPositionKind()) {
616     case IRPosition::IRP_ARGUMENT:
617     case IRPosition::IRP_CALL_SITE_ARGUMENT:
618       return true;
619     default:
620       return false;
621     }
622   }
623 
624   /// Return the same position without the call base context.
625   IRPosition stripCallBaseContext() const {
626     IRPosition Result = *this;
627     Result.CBContext = nullptr;
628     return Result;
629   }
630 
631   /// Get the call base context from the position.
632   const CallBaseContext *getCallBaseContext() const { return CBContext; }
633 
634   /// Check if the position has any call base context.
635   bool hasCallBaseContext() const { return CBContext != nullptr; }
636 
637   /// Special DenseMap key values.
638   ///
639   ///{
640   static const IRPosition EmptyKey;
641   static const IRPosition TombstoneKey;
642   ///}
643 
644   /// Conversion into a void * to allow reuse of pointer hashing.
645   operator void *() const { return Enc.getOpaqueValue(); }
646 
647 private:
648   /// Private constructor for special values only!
649   explicit IRPosition(void *Ptr, const CallBaseContext *CBContext = nullptr)
650       : CBContext(CBContext) {
651     Enc.setFromOpaqueValue(Ptr);
652   }
653 
654   /// IRPosition anchored at \p AnchorVal with kind/argument numbet \p PK.
655   explicit IRPosition(Value &AnchorVal, Kind PK,
656                       const CallBaseContext *CBContext = nullptr)
657       : CBContext(CBContext) {
658     switch (PK) {
659     case IRPosition::IRP_INVALID:
660       llvm_unreachable("Cannot create invalid IRP with an anchor value!");
661       break;
662     case IRPosition::IRP_FLOAT:
663       // Special case for floating functions.
664       if (isa<Function>(AnchorVal))
665         Enc = {&AnchorVal, ENC_FLOATING_FUNCTION};
666       else
667         Enc = {&AnchorVal, ENC_VALUE};
668       break;
669     case IRPosition::IRP_FUNCTION:
670     case IRPosition::IRP_CALL_SITE:
671       Enc = {&AnchorVal, ENC_VALUE};
672       break;
673     case IRPosition::IRP_RETURNED:
674     case IRPosition::IRP_CALL_SITE_RETURNED:
675       Enc = {&AnchorVal, ENC_RETURNED_VALUE};
676       break;
677     case IRPosition::IRP_ARGUMENT:
678       Enc = {&AnchorVal, ENC_VALUE};
679       break;
680     case IRPosition::IRP_CALL_SITE_ARGUMENT:
681       llvm_unreachable(
682           "Cannot create call site argument IRP with an anchor value!");
683       break;
684     }
685     verify();
686   }
687 
688   /// Return the callee argument number of the associated value if it is an
689   /// argument or call site argument. See also `getCalleeArgNo` and
690   /// `getCallSiteArgNo`.
691   int getArgNo(bool CallbackCalleeArgIfApplicable) const {
692     if (CallbackCalleeArgIfApplicable)
693       if (Argument *Arg = getAssociatedArgument())
694         return Arg->getArgNo();
695     switch (getPositionKind()) {
696     case IRPosition::IRP_ARGUMENT:
697       return cast<Argument>(getAsValuePtr())->getArgNo();
698     case IRPosition::IRP_CALL_SITE_ARGUMENT: {
699       Use &U = *getAsUsePtr();
700       return cast<CallBase>(U.getUser())->getArgOperandNo(&U);
701     }
702     default:
703       return -1;
704     }
705   }
706 
707   /// IRPosition for the use \p U. The position kind \p PK needs to be
708   /// IRP_CALL_SITE_ARGUMENT, the anchor value is the user, the associated value
709   /// the used value.
710   explicit IRPosition(Use &U, Kind PK) {
711     assert(PK == IRP_CALL_SITE_ARGUMENT &&
712            "Use constructor is for call site arguments only!");
713     Enc = {&U, ENC_CALL_SITE_ARGUMENT_USE};
714     verify();
715   }
716 
717   /// Verify internal invariants.
718   void verify();
719 
720   /// Return the attributes of kind \p AK existing in the IR as attribute.
721   bool getAttrsFromIRAttr(Attribute::AttrKind AK,
722                           SmallVectorImpl<Attribute> &Attrs) const;
723 
724   /// Return the attributes of kind \p AK existing in the IR as operand bundles
725   /// of an llvm.assume.
726   bool getAttrsFromAssumes(Attribute::AttrKind AK,
727                            SmallVectorImpl<Attribute> &Attrs,
728                            Attributor &A) const;
729 
730   /// Return the underlying pointer as Value *, valid for all positions but
731   /// IRP_CALL_SITE_ARGUMENT.
732   Value *getAsValuePtr() const {
733     assert(getEncodingBits() != ENC_CALL_SITE_ARGUMENT_USE &&
734            "Not a value pointer!");
735     return reinterpret_cast<Value *>(Enc.getPointer());
736   }
737 
738   /// Return the underlying pointer as Use *, valid only for
739   /// IRP_CALL_SITE_ARGUMENT positions.
740   Use *getAsUsePtr() const {
741     assert(getEncodingBits() == ENC_CALL_SITE_ARGUMENT_USE &&
742            "Not a value pointer!");
743     return reinterpret_cast<Use *>(Enc.getPointer());
744   }
745 
746   /// Return true if \p EncodingBits describe a returned or call site returned
747   /// position.
748   static bool isReturnPosition(char EncodingBits) {
749     return EncodingBits == ENC_RETURNED_VALUE;
750   }
751 
752   /// Return true if the encoding bits describe a returned or call site returned
753   /// position.
754   bool isReturnPosition() const { return isReturnPosition(getEncodingBits()); }
755 
756   /// The encoding of the IRPosition is a combination of a pointer and two
757   /// encoding bits. The values of the encoding bits are defined in the enum
758   /// below. The pointer is either a Value* (for the first three encoding bit
759   /// combinations) or Use* (for ENC_CALL_SITE_ARGUMENT_USE).
760   ///
761   ///{
762   enum {
763     ENC_VALUE = 0b00,
764     ENC_RETURNED_VALUE = 0b01,
765     ENC_FLOATING_FUNCTION = 0b10,
766     ENC_CALL_SITE_ARGUMENT_USE = 0b11,
767   };
768 
769   // Reserve the maximal amount of bits so there is no need to mask out the
770   // remaining ones. We will not encode anything else in the pointer anyway.
771   static constexpr int NumEncodingBits =
772       PointerLikeTypeTraits<void *>::NumLowBitsAvailable;
773   static_assert(NumEncodingBits >= 2, "At least two bits are required!");
774 
775   /// The pointer with the encoding bits.
776   PointerIntPair<void *, NumEncodingBits, char> Enc;
777   ///}
778 
779   /// Call base context. Used for callsite specific analysis.
780   const CallBaseContext *CBContext = nullptr;
781 
782   /// Return the encoding bits.
783   char getEncodingBits() const { return Enc.getInt(); }
784 };
785 
786 /// Helper that allows IRPosition as a key in a DenseMap.
787 template <> struct DenseMapInfo<IRPosition> {
788   static inline IRPosition getEmptyKey() { return IRPosition::EmptyKey; }
789   static inline IRPosition getTombstoneKey() {
790     return IRPosition::TombstoneKey;
791   }
792   static unsigned getHashValue(const IRPosition &IRP) {
793     return (DenseMapInfo<void *>::getHashValue(IRP) << 4) ^
794            (DenseMapInfo<Value *>::getHashValue(IRP.getCallBaseContext()));
795   }
796 
797   static bool isEqual(const IRPosition &a, const IRPosition &b) {
798     return a == b;
799   }
800 };
801 
802 /// A visitor class for IR positions.
803 ///
804 /// Given a position P, the SubsumingPositionIterator allows to visit "subsuming
805 /// positions" wrt. attributes/information. Thus, if a piece of information
806 /// holds for a subsuming position, it also holds for the position P.
807 ///
808 /// The subsuming positions always include the initial position and then,
809 /// depending on the position kind, additionally the following ones:
810 /// - for IRP_RETURNED:
811 ///   - the function (IRP_FUNCTION)
812 /// - for IRP_ARGUMENT:
813 ///   - the function (IRP_FUNCTION)
814 /// - for IRP_CALL_SITE:
815 ///   - the callee (IRP_FUNCTION), if known
816 /// - for IRP_CALL_SITE_RETURNED:
817 ///   - the callee (IRP_RETURNED), if known
818 ///   - the call site (IRP_FUNCTION)
819 ///   - the callee (IRP_FUNCTION), if known
820 /// - for IRP_CALL_SITE_ARGUMENT:
821 ///   - the argument of the callee (IRP_ARGUMENT), if known
822 ///   - the callee (IRP_FUNCTION), if known
823 ///   - the position the call site argument is associated with if it is not
824 ///     anchored to the call site, e.g., if it is an argument then the argument
825 ///     (IRP_ARGUMENT)
826 class SubsumingPositionIterator {
827   SmallVector<IRPosition, 4> IRPositions;
828   using iterator = decltype(IRPositions)::iterator;
829 
830 public:
831   SubsumingPositionIterator(const IRPosition &IRP);
832   iterator begin() { return IRPositions.begin(); }
833   iterator end() { return IRPositions.end(); }
834 };
835 
836 /// Wrapper for FunctoinAnalysisManager.
837 struct AnalysisGetter {
838   template <typename Analysis>
839   typename Analysis::Result *getAnalysis(const Function &F) {
840     if (!FAM || !F.getParent())
841       return nullptr;
842     return &FAM->getResult<Analysis>(const_cast<Function &>(F));
843   }
844 
845   AnalysisGetter(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
846   AnalysisGetter() {}
847 
848 private:
849   FunctionAnalysisManager *FAM = nullptr;
850 };
851 
852 /// Data structure to hold cached (LLVM-IR) information.
853 ///
854 /// All attributes are given an InformationCache object at creation time to
855 /// avoid inspection of the IR by all of them individually. This default
856 /// InformationCache will hold information required by 'default' attributes,
857 /// thus the ones deduced when Attributor::identifyDefaultAbstractAttributes(..)
858 /// is called.
859 ///
860 /// If custom abstract attributes, registered manually through
861 /// Attributor::registerAA(...), need more information, especially if it is not
862 /// reusable, it is advised to inherit from the InformationCache and cast the
863 /// instance down in the abstract attributes.
864 struct InformationCache {
865   InformationCache(const Module &M, AnalysisGetter &AG,
866                    BumpPtrAllocator &Allocator, SetVector<Function *> *CGSCC)
867       : DL(M.getDataLayout()), Allocator(Allocator),
868         Explorer(
869             /* ExploreInterBlock */ true, /* ExploreCFGForward */ true,
870             /* ExploreCFGBackward */ true,
871             /* LIGetter */
872             [&](const Function &F) { return AG.getAnalysis<LoopAnalysis>(F); },
873             /* DTGetter */
874             [&](const Function &F) {
875               return AG.getAnalysis<DominatorTreeAnalysis>(F);
876             },
877             /* PDTGetter */
878             [&](const Function &F) {
879               return AG.getAnalysis<PostDominatorTreeAnalysis>(F);
880             }),
881         AG(AG), CGSCC(CGSCC), TargetTriple(M.getTargetTriple()) {
882     if (CGSCC)
883       initializeModuleSlice(*CGSCC);
884   }
885 
886   ~InformationCache() {
887     // The FunctionInfo objects are allocated via a BumpPtrAllocator, we call
888     // the destructor manually.
889     for (auto &It : FuncInfoMap)
890       It.getSecond()->~FunctionInfo();
891   }
892 
893   /// Apply \p CB to all uses of \p F. If \p LookThroughConstantExprUses is
894   /// true, constant expression users are not given to \p CB but their uses are
895   /// traversed transitively.
896   template <typename CBTy>
897   static void foreachUse(Function &F, CBTy CB,
898                          bool LookThroughConstantExprUses = true) {
899     SmallVector<Use *, 8> Worklist(make_pointer_range(F.uses()));
900 
901     for (unsigned Idx = 0; Idx < Worklist.size(); ++Idx) {
902       Use &U = *Worklist[Idx];
903 
904       // Allow use in constant bitcasts and simply look through them.
905       if (LookThroughConstantExprUses && isa<ConstantExpr>(U.getUser())) {
906         for (Use &CEU : cast<ConstantExpr>(U.getUser())->uses())
907           Worklist.push_back(&CEU);
908         continue;
909       }
910 
911       CB(U);
912     }
913   }
914 
915   /// Initialize the ModuleSlice member based on \p SCC. ModuleSlices contains
916   /// (a subset of) all functions that we can look at during this SCC traversal.
917   /// This includes functions (transitively) called from the SCC and the
918   /// (transitive) callers of SCC functions. We also can look at a function if
919   /// there is a "reference edge", i.a., if the function somehow uses (!=calls)
920   /// a function in the SCC or a caller of a function in the SCC.
921   void initializeModuleSlice(SetVector<Function *> &SCC) {
922     ModuleSlice.insert(SCC.begin(), SCC.end());
923 
924     SmallPtrSet<Function *, 16> Seen;
925     SmallVector<Function *, 16> Worklist(SCC.begin(), SCC.end());
926     while (!Worklist.empty()) {
927       Function *F = Worklist.pop_back_val();
928       ModuleSlice.insert(F);
929 
930       for (Instruction &I : instructions(*F))
931         if (auto *CB = dyn_cast<CallBase>(&I))
932           if (Function *Callee = CB->getCalledFunction())
933             if (Seen.insert(Callee).second)
934               Worklist.push_back(Callee);
935     }
936 
937     Seen.clear();
938     Worklist.append(SCC.begin(), SCC.end());
939     while (!Worklist.empty()) {
940       Function *F = Worklist.pop_back_val();
941       ModuleSlice.insert(F);
942 
943       // Traverse all transitive uses.
944       foreachUse(*F, [&](Use &U) {
945         if (auto *UsrI = dyn_cast<Instruction>(U.getUser()))
946           if (Seen.insert(UsrI->getFunction()).second)
947             Worklist.push_back(UsrI->getFunction());
948       });
949     }
950   }
951 
952   /// The slice of the module we are allowed to look at.
953   SmallPtrSet<Function *, 8> ModuleSlice;
954 
955   /// A vector type to hold instructions.
956   using InstructionVectorTy = SmallVector<Instruction *, 8>;
957 
958   /// A map type from opcodes to instructions with this opcode.
959   using OpcodeInstMapTy = DenseMap<unsigned, InstructionVectorTy *>;
960 
961   /// Return the map that relates "interesting" opcodes with all instructions
962   /// with that opcode in \p F.
963   OpcodeInstMapTy &getOpcodeInstMapForFunction(const Function &F) {
964     return getFunctionInfo(F).OpcodeInstMap;
965   }
966 
967   /// Return the instructions in \p F that may read or write memory.
968   InstructionVectorTy &getReadOrWriteInstsForFunction(const Function &F) {
969     return getFunctionInfo(F).RWInsts;
970   }
971 
972   /// Return MustBeExecutedContextExplorer
973   MustBeExecutedContextExplorer &getMustBeExecutedContextExplorer() {
974     return Explorer;
975   }
976 
977   /// Return TargetLibraryInfo for function \p F.
978   TargetLibraryInfo *getTargetLibraryInfoForFunction(const Function &F) {
979     return AG.getAnalysis<TargetLibraryAnalysis>(F);
980   }
981 
982   /// Return AliasAnalysis Result for function \p F.
983   AAResults *getAAResultsForFunction(const Function &F);
984 
985   /// Return true if \p Arg is involved in a must-tail call, thus the argument
986   /// of the caller or callee.
987   bool isInvolvedInMustTailCall(const Argument &Arg) {
988     FunctionInfo &FI = getFunctionInfo(*Arg.getParent());
989     return FI.CalledViaMustTail || FI.ContainsMustTailCall;
990   }
991 
992   /// Return the analysis result from a pass \p AP for function \p F.
993   template <typename AP>
994   typename AP::Result *getAnalysisResultForFunction(const Function &F) {
995     return AG.getAnalysis<AP>(F);
996   }
997 
998   /// Return SCC size on call graph for function \p F or 0 if unknown.
999   unsigned getSccSize(const Function &F) {
1000     if (CGSCC && CGSCC->count(const_cast<Function *>(&F)))
1001       return CGSCC->size();
1002     return 0;
1003   }
1004 
1005   /// Return datalayout used in the module.
1006   const DataLayout &getDL() { return DL; }
1007 
1008   /// Return the map conaining all the knowledge we have from `llvm.assume`s.
1009   const RetainedKnowledgeMap &getKnowledgeMap() const { return KnowledgeMap; }
1010 
1011   /// Return if \p To is potentially reachable form \p From or not
1012   /// If the same query was answered, return cached result
1013   bool getPotentiallyReachable(const Instruction &From, const Instruction &To) {
1014     auto KeyPair = std::make_pair(&From, &To);
1015     auto Iter = PotentiallyReachableMap.find(KeyPair);
1016     if (Iter != PotentiallyReachableMap.end())
1017       return Iter->second;
1018     const Function &F = *From.getFunction();
1019     bool Result = true;
1020     if (From.getFunction() == To.getFunction())
1021       Result = isPotentiallyReachable(&From, &To, nullptr,
1022                                       AG.getAnalysis<DominatorTreeAnalysis>(F),
1023                                       AG.getAnalysis<LoopAnalysis>(F));
1024     PotentiallyReachableMap.insert(std::make_pair(KeyPair, Result));
1025     return Result;
1026   }
1027 
1028   /// Check whether \p F is part of module slice.
1029   bool isInModuleSlice(const Function &F) {
1030     return ModuleSlice.count(const_cast<Function *>(&F));
1031   }
1032 
1033   /// Return true if the stack (llvm::Alloca) can be accessed by other threads.
1034   bool stackIsAccessibleByOtherThreads() { return !targetIsGPU(); }
1035 
1036   /// Return true if the target is a GPU.
1037   bool targetIsGPU() {
1038     return TargetTriple.isAMDGPU() || TargetTriple.isNVPTX();
1039   }
1040 
1041 private:
1042   struct FunctionInfo {
1043     ~FunctionInfo();
1044 
1045     /// A nested map that remembers all instructions in a function with a
1046     /// certain instruction opcode (Instruction::getOpcode()).
1047     OpcodeInstMapTy OpcodeInstMap;
1048 
1049     /// A map from functions to their instructions that may read or write
1050     /// memory.
1051     InstructionVectorTy RWInsts;
1052 
1053     /// Function is called by a `musttail` call.
1054     bool CalledViaMustTail;
1055 
1056     /// Function contains a `musttail` call.
1057     bool ContainsMustTailCall;
1058   };
1059 
1060   /// A map type from functions to informatio about it.
1061   DenseMap<const Function *, FunctionInfo *> FuncInfoMap;
1062 
1063   /// Return information about the function \p F, potentially by creating it.
1064   FunctionInfo &getFunctionInfo(const Function &F) {
1065     FunctionInfo *&FI = FuncInfoMap[&F];
1066     if (!FI) {
1067       FI = new (Allocator) FunctionInfo();
1068       initializeInformationCache(F, *FI);
1069     }
1070     return *FI;
1071   }
1072 
1073   /// Initialize the function information cache \p FI for the function \p F.
1074   ///
1075   /// This method needs to be called for all function that might be looked at
1076   /// through the information cache interface *prior* to looking at them.
1077   void initializeInformationCache(const Function &F, FunctionInfo &FI);
1078 
1079   /// The datalayout used in the module.
1080   const DataLayout &DL;
1081 
1082   /// The allocator used to allocate memory, e.g. for `FunctionInfo`s.
1083   BumpPtrAllocator &Allocator;
1084 
1085   /// MustBeExecutedContextExplorer
1086   MustBeExecutedContextExplorer Explorer;
1087 
1088   /// A map with knowledge retained in `llvm.assume` instructions.
1089   RetainedKnowledgeMap KnowledgeMap;
1090 
1091   /// Getters for analysis.
1092   AnalysisGetter &AG;
1093 
1094   /// The underlying CGSCC, or null if not available.
1095   SetVector<Function *> *CGSCC;
1096 
1097   /// Set of inlineable functions
1098   SmallPtrSet<const Function *, 8> InlineableFunctions;
1099 
1100   /// A map for caching results of queries for isPotentiallyReachable
1101   DenseMap<std::pair<const Instruction *, const Instruction *>, bool>
1102       PotentiallyReachableMap;
1103 
1104   /// The triple describing the target machine.
1105   Triple TargetTriple;
1106 
1107   /// Give the Attributor access to the members so
1108   /// Attributor::identifyDefaultAbstractAttributes(...) can initialize them.
1109   friend struct Attributor;
1110 };
1111 
1112 /// The fixpoint analysis framework that orchestrates the attribute deduction.
1113 ///
1114 /// The Attributor provides a general abstract analysis framework (guided
1115 /// fixpoint iteration) as well as helper functions for the deduction of
1116 /// (LLVM-IR) attributes. However, also other code properties can be deduced,
1117 /// propagated, and ultimately manifested through the Attributor framework. This
1118 /// is particularly useful if these properties interact with attributes and a
1119 /// co-scheduled deduction allows to improve the solution. Even if not, thus if
1120 /// attributes/properties are completely isolated, they should use the
1121 /// Attributor framework to reduce the number of fixpoint iteration frameworks
1122 /// in the code base. Note that the Attributor design makes sure that isolated
1123 /// attributes are not impacted, in any way, by others derived at the same time
1124 /// if there is no cross-reasoning performed.
1125 ///
1126 /// The public facing interface of the Attributor is kept simple and basically
1127 /// allows abstract attributes to one thing, query abstract attributes
1128 /// in-flight. There are two reasons to do this:
1129 ///    a) The optimistic state of one abstract attribute can justify an
1130 ///       optimistic state of another, allowing to framework to end up with an
1131 ///       optimistic (=best possible) fixpoint instead of one based solely on
1132 ///       information in the IR.
1133 ///    b) This avoids reimplementing various kinds of lookups, e.g., to check
1134 ///       for existing IR attributes, in favor of a single lookups interface
1135 ///       provided by an abstract attribute subclass.
1136 ///
1137 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
1138 ///       described in the file comment.
1139 struct Attributor {
1140 
1141   using OptimizationRemarkGetter =
1142       function_ref<OptimizationRemarkEmitter &(Function *)>;
1143 
1144   /// Constructor
1145   ///
1146   /// \param Functions The set of functions we are deriving attributes for.
1147   /// \param InfoCache Cache to hold various information accessible for
1148   ///                  the abstract attributes.
1149   /// \param CGUpdater Helper to update an underlying call graph.
1150   /// \param Allowed If not null, a set limiting the attribute opportunities.
1151   /// \param DeleteFns Whether to delete functions.
1152   /// \param RewriteSignatures Whether to rewrite function signatures.
1153   /// \param MaxFixedPointIterations Maximum number of iterations to run until
1154   ///                                fixpoint.
1155   Attributor(SetVector<Function *> &Functions, InformationCache &InfoCache,
1156              CallGraphUpdater &CGUpdater,
1157              DenseSet<const char *> *Allowed = nullptr, bool DeleteFns = true,
1158              bool RewriteSignatures = true)
1159       : Allocator(InfoCache.Allocator), Functions(Functions),
1160         InfoCache(InfoCache), CGUpdater(CGUpdater), Allowed(Allowed),
1161         DeleteFns(DeleteFns), RewriteSignatures(RewriteSignatures),
1162         MaxFixpointIterations(None), OREGetter(None), PassName("") {}
1163 
1164   /// Constructor
1165   ///
1166   /// \param Functions The set of functions we are deriving attributes for.
1167   /// \param InfoCache Cache to hold various information accessible for
1168   ///                  the abstract attributes.
1169   /// \param CGUpdater Helper to update an underlying call graph.
1170   /// \param Allowed If not null, a set limiting the attribute opportunities.
1171   /// \param DeleteFns Whether to delete functions
1172   /// \param MaxFixedPointIterations Maximum number of iterations to run until
1173   ///                                fixpoint.
1174   /// \param OREGetter A callback function that returns an ORE object from a
1175   ///                  Function pointer.
1176   /// \param PassName  The name of the pass emitting remarks.
1177   Attributor(SetVector<Function *> &Functions, InformationCache &InfoCache,
1178              CallGraphUpdater &CGUpdater, DenseSet<const char *> *Allowed,
1179              bool DeleteFns, bool RewriteSignatures,
1180              Optional<unsigned> MaxFixpointIterations,
1181              OptimizationRemarkGetter OREGetter, const char *PassName)
1182       : Allocator(InfoCache.Allocator), Functions(Functions),
1183         InfoCache(InfoCache), CGUpdater(CGUpdater), Allowed(Allowed),
1184         DeleteFns(DeleteFns), RewriteSignatures(RewriteSignatures),
1185         MaxFixpointIterations(MaxFixpointIterations),
1186         OREGetter(Optional<OptimizationRemarkGetter>(OREGetter)),
1187         PassName(PassName) {}
1188 
1189   ~Attributor();
1190 
1191   /// Run the analyses until a fixpoint is reached or enforced (timeout).
1192   ///
1193   /// The attributes registered with this Attributor can be used after as long
1194   /// as the Attributor is not destroyed (it owns the attributes now).
1195   ///
1196   /// \Returns CHANGED if the IR was changed, otherwise UNCHANGED.
1197   ChangeStatus run();
1198 
1199   /// Lookup an abstract attribute of type \p AAType at position \p IRP. While
1200   /// no abstract attribute is found equivalent positions are checked, see
1201   /// SubsumingPositionIterator. Thus, the returned abstract attribute
1202   /// might be anchored at a different position, e.g., the callee if \p IRP is a
1203   /// call base.
1204   ///
1205   /// This method is the only (supported) way an abstract attribute can retrieve
1206   /// information from another abstract attribute. As an example, take an
1207   /// abstract attribute that determines the memory access behavior for a
1208   /// argument (readnone, readonly, ...). It should use `getAAFor` to get the
1209   /// most optimistic information for other abstract attributes in-flight, e.g.
1210   /// the one reasoning about the "captured" state for the argument or the one
1211   /// reasoning on the memory access behavior of the function as a whole.
1212   ///
1213   /// If the DepClass enum is set to `DepClassTy::None` the dependence from
1214   /// \p QueryingAA to the return abstract attribute is not automatically
1215   /// recorded. This should only be used if the caller will record the
1216   /// dependence explicitly if necessary, thus if it the returned abstract
1217   /// attribute is used for reasoning. To record the dependences explicitly use
1218   /// the `Attributor::recordDependence` method.
1219   template <typename AAType>
1220   const AAType &getAAFor(const AbstractAttribute &QueryingAA,
1221                          const IRPosition &IRP, DepClassTy DepClass) {
1222     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1223                                     /* ForceUpdate */ false);
1224   }
1225 
1226   /// Similar to getAAFor but the return abstract attribute will be updated (via
1227   /// `AbstractAttribute::update`) even if it is found in the cache. This is
1228   /// especially useful for AAIsDead as changes in liveness can make updates
1229   /// possible/useful that were not happening before as the abstract attribute
1230   /// was assumed dead.
1231   template <typename AAType>
1232   const AAType &getAndUpdateAAFor(const AbstractAttribute &QueryingAA,
1233                                   const IRPosition &IRP, DepClassTy DepClass) {
1234     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1235                                     /* ForceUpdate */ true);
1236   }
1237 
1238   /// The version of getAAFor that allows to omit a querying abstract
1239   /// attribute. Using this after Attributor started running is restricted to
1240   /// only the Attributor itself. Initial seeding of AAs can be done via this
1241   /// function.
1242   /// NOTE: ForceUpdate is ignored in any stage other than the update stage.
1243   template <typename AAType>
1244   const AAType &getOrCreateAAFor(IRPosition IRP,
1245                                  const AbstractAttribute *QueryingAA,
1246                                  DepClassTy DepClass, bool ForceUpdate = false,
1247                                  bool UpdateAfterInit = true) {
1248     if (!shouldPropagateCallBaseContext(IRP))
1249       IRP = IRP.stripCallBaseContext();
1250 
1251     if (AAType *AAPtr = lookupAAFor<AAType>(IRP, QueryingAA, DepClass,
1252                                             /* AllowInvalidState */ true)) {
1253       if (ForceUpdate && Phase == AttributorPhase::UPDATE)
1254         updateAA(*AAPtr);
1255       return *AAPtr;
1256     }
1257 
1258     // No matching attribute found, create one.
1259     // Use the static create method.
1260     auto &AA = AAType::createForPosition(IRP, *this);
1261 
1262     // If we are currenty seeding attributes, enforce seeding rules.
1263     if (Phase == AttributorPhase::SEEDING && !shouldSeedAttribute(AA)) {
1264       AA.getState().indicatePessimisticFixpoint();
1265       return AA;
1266     }
1267 
1268     registerAA(AA);
1269 
1270     // For now we ignore naked and optnone functions.
1271     bool Invalidate = Allowed && !Allowed->count(&AAType::ID);
1272     const Function *FnScope = IRP.getAnchorScope();
1273     if (FnScope)
1274       Invalidate |= FnScope->hasFnAttribute(Attribute::Naked) ||
1275                     FnScope->hasFnAttribute(Attribute::OptimizeNone);
1276 
1277     // Avoid too many nested initializations to prevent a stack overflow.
1278     Invalidate |= InitializationChainLength > MaxInitializationChainLength;
1279 
1280     // Bootstrap the new attribute with an initial update to propagate
1281     // information, e.g., function -> call site. If it is not on a given
1282     // Allowed we will not perform updates at all.
1283     if (Invalidate) {
1284       AA.getState().indicatePessimisticFixpoint();
1285       return AA;
1286     }
1287 
1288     {
1289       TimeTraceScope TimeScope(AA.getName() + "::initialize");
1290       ++InitializationChainLength;
1291       AA.initialize(*this);
1292       --InitializationChainLength;
1293     }
1294 
1295     // Initialize and update is allowed for code outside of the current function
1296     // set, but only if it is part of module slice we are allowed to look at.
1297     // Only exception is AAIsDeadFunction whose initialization is prevented
1298     // directly, since we don't to compute it twice.
1299     if (FnScope && !Functions.count(const_cast<Function *>(FnScope))) {
1300       if (!getInfoCache().isInModuleSlice(*FnScope)) {
1301         AA.getState().indicatePessimisticFixpoint();
1302         return AA;
1303       }
1304     }
1305 
1306     // If this is queried in the manifest stage, we force the AA to indicate
1307     // pessimistic fixpoint immediately.
1308     if (Phase == AttributorPhase::MANIFEST) {
1309       AA.getState().indicatePessimisticFixpoint();
1310       return AA;
1311     }
1312 
1313     // Allow seeded attributes to declare dependencies.
1314     // Remember the seeding state.
1315     if (UpdateAfterInit) {
1316       AttributorPhase OldPhase = Phase;
1317       Phase = AttributorPhase::UPDATE;
1318 
1319       updateAA(AA);
1320 
1321       Phase = OldPhase;
1322     }
1323 
1324     if (QueryingAA && AA.getState().isValidState())
1325       recordDependence(AA, const_cast<AbstractAttribute &>(*QueryingAA),
1326                        DepClass);
1327     return AA;
1328   }
1329   template <typename AAType>
1330   const AAType &getOrCreateAAFor(const IRPosition &IRP) {
1331     return getOrCreateAAFor<AAType>(IRP, /* QueryingAA */ nullptr,
1332                                     DepClassTy::NONE);
1333   }
1334 
1335   /// Return the attribute of \p AAType for \p IRP if existing and valid. This
1336   /// also allows non-AA users lookup.
1337   template <typename AAType>
1338   AAType *lookupAAFor(const IRPosition &IRP,
1339                       const AbstractAttribute *QueryingAA = nullptr,
1340                       DepClassTy DepClass = DepClassTy::OPTIONAL,
1341                       bool AllowInvalidState = false) {
1342     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1343                   "Cannot query an attribute with a type not derived from "
1344                   "'AbstractAttribute'!");
1345     // Lookup the abstract attribute of type AAType. If found, return it after
1346     // registering a dependence of QueryingAA on the one returned attribute.
1347     AbstractAttribute *AAPtr = AAMap.lookup({&AAType::ID, IRP});
1348     if (!AAPtr)
1349       return nullptr;
1350 
1351     AAType *AA = static_cast<AAType *>(AAPtr);
1352 
1353     // Do not register a dependence on an attribute with an invalid state.
1354     if (DepClass != DepClassTy::NONE && QueryingAA &&
1355         AA->getState().isValidState())
1356       recordDependence(*AA, const_cast<AbstractAttribute &>(*QueryingAA),
1357                        DepClass);
1358 
1359     // Return nullptr if this attribute has an invalid state.
1360     if (!AllowInvalidState && !AA->getState().isValidState())
1361       return nullptr;
1362     return AA;
1363   }
1364 
1365   /// Explicitly record a dependence from \p FromAA to \p ToAA, that is if
1366   /// \p FromAA changes \p ToAA should be updated as well.
1367   ///
1368   /// This method should be used in conjunction with the `getAAFor` method and
1369   /// with the DepClass enum passed to the method set to None. This can
1370   /// be beneficial to avoid false dependences but it requires the users of
1371   /// `getAAFor` to explicitly record true dependences through this method.
1372   /// The \p DepClass flag indicates if the dependence is striclty necessary.
1373   /// That means for required dependences, if \p FromAA changes to an invalid
1374   /// state, \p ToAA can be moved to a pessimistic fixpoint because it required
1375   /// information from \p FromAA but none are available anymore.
1376   void recordDependence(const AbstractAttribute &FromAA,
1377                         const AbstractAttribute &ToAA, DepClassTy DepClass);
1378 
1379   /// Introduce a new abstract attribute into the fixpoint analysis.
1380   ///
1381   /// Note that ownership of the attribute is given to the Attributor. It will
1382   /// invoke delete for the Attributor on destruction of the Attributor.
1383   ///
1384   /// Attributes are identified by their IR position (AAType::getIRPosition())
1385   /// and the address of their static member (see AAType::ID).
1386   template <typename AAType> AAType &registerAA(AAType &AA) {
1387     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1388                   "Cannot register an attribute with a type not derived from "
1389                   "'AbstractAttribute'!");
1390     // Put the attribute in the lookup map structure and the container we use to
1391     // keep track of all attributes.
1392     const IRPosition &IRP = AA.getIRPosition();
1393     AbstractAttribute *&AAPtr = AAMap[{&AAType::ID, IRP}];
1394 
1395     assert(!AAPtr && "Attribute already in map!");
1396     AAPtr = &AA;
1397 
1398     // Register AA with the synthetic root only before the manifest stage.
1399     if (Phase == AttributorPhase::SEEDING || Phase == AttributorPhase::UPDATE)
1400       DG.SyntheticRoot.Deps.push_back(
1401           AADepGraphNode::DepTy(&AA, unsigned(DepClassTy::REQUIRED)));
1402 
1403     return AA;
1404   }
1405 
1406   /// Return the internal information cache.
1407   InformationCache &getInfoCache() { return InfoCache; }
1408 
1409   /// Return true if this is a module pass, false otherwise.
1410   bool isModulePass() const {
1411     return !Functions.empty() &&
1412            Functions.size() == Functions.front()->getParent()->size();
1413   }
1414 
1415   /// Return true if we derive attributes for \p Fn
1416   bool isRunOn(Function &Fn) const {
1417     return Functions.empty() || Functions.count(&Fn);
1418   }
1419 
1420   /// Determine opportunities to derive 'default' attributes in \p F and create
1421   /// abstract attribute objects for them.
1422   ///
1423   /// \param F The function that is checked for attribute opportunities.
1424   ///
1425   /// Note that abstract attribute instances are generally created even if the
1426   /// IR already contains the information they would deduce. The most important
1427   /// reason for this is the single interface, the one of the abstract attribute
1428   /// instance, which can be queried without the need to look at the IR in
1429   /// various places.
1430   void identifyDefaultAbstractAttributes(Function &F);
1431 
1432   /// Determine whether the function \p F is IPO amendable
1433   ///
1434   /// If a function is exactly defined or it has alwaysinline attribute
1435   /// and is viable to be inlined, we say it is IPO amendable
1436   bool isFunctionIPOAmendable(const Function &F) {
1437     return F.hasExactDefinition() || InfoCache.InlineableFunctions.count(&F);
1438   }
1439 
1440   /// Mark the internal function \p F as live.
1441   ///
1442   /// This will trigger the identification and initialization of attributes for
1443   /// \p F.
1444   void markLiveInternalFunction(const Function &F) {
1445     assert(F.hasLocalLinkage() &&
1446            "Only local linkage is assumed dead initially.");
1447 
1448     identifyDefaultAbstractAttributes(const_cast<Function &>(F));
1449   }
1450 
1451   /// Helper function to remove callsite.
1452   void removeCallSite(CallInst *CI) {
1453     if (!CI)
1454       return;
1455 
1456     CGUpdater.removeCallSite(*CI);
1457   }
1458 
1459   /// Record that \p U is to be replaces with \p NV after information was
1460   /// manifested. This also triggers deletion of trivially dead istructions.
1461   bool changeUseAfterManifest(Use &U, Value &NV) {
1462     Value *&V = ToBeChangedUses[&U];
1463     if (V && (V->stripPointerCasts() == NV.stripPointerCasts() ||
1464               isa_and_nonnull<UndefValue>(V)))
1465       return false;
1466     assert((!V || V == &NV || isa<UndefValue>(NV)) &&
1467            "Use was registered twice for replacement with different values!");
1468     V = &NV;
1469     return true;
1470   }
1471 
1472   /// Helper function to replace all uses of \p V with \p NV. Return true if
1473   /// there is any change. The flag \p ChangeDroppable indicates if dropppable
1474   /// uses should be changed too.
1475   bool changeValueAfterManifest(Value &V, Value &NV,
1476                                 bool ChangeDroppable = true) {
1477     auto &Entry = ToBeChangedValues[&V];
1478     Value *&CurNV = Entry.first;
1479     if (CurNV && (CurNV->stripPointerCasts() == NV.stripPointerCasts() ||
1480                   isa<UndefValue>(CurNV)))
1481       return false;
1482     assert((!CurNV || CurNV == &NV || isa<UndefValue>(NV)) &&
1483            "Value replacement was registered twice with different values!");
1484     CurNV = &NV;
1485     Entry.second = ChangeDroppable;
1486     return true;
1487   }
1488 
1489   /// Record that \p I is to be replaced with `unreachable` after information
1490   /// was manifested.
1491   void changeToUnreachableAfterManifest(Instruction *I) {
1492     ToBeChangedToUnreachableInsts.insert(I);
1493   }
1494 
1495   /// Record that \p II has at least one dead successor block. This information
1496   /// is used, e.g., to replace \p II with a call, after information was
1497   /// manifested.
1498   void registerInvokeWithDeadSuccessor(InvokeInst &II) {
1499     InvokeWithDeadSuccessor.push_back(&II);
1500   }
1501 
1502   /// Record that \p I is deleted after information was manifested. This also
1503   /// triggers deletion of trivially dead istructions.
1504   void deleteAfterManifest(Instruction &I) { ToBeDeletedInsts.insert(&I); }
1505 
1506   /// Record that \p BB is deleted after information was manifested. This also
1507   /// triggers deletion of trivially dead istructions.
1508   void deleteAfterManifest(BasicBlock &BB) { ToBeDeletedBlocks.insert(&BB); }
1509 
1510   // Record that \p BB is added during the manifest of an AA. Added basic blocks
1511   // are preserved in the IR.
1512   void registerManifestAddedBasicBlock(BasicBlock &BB) {
1513     ManifestAddedBlocks.insert(&BB);
1514   }
1515 
1516   /// Record that \p F is deleted after information was manifested.
1517   void deleteAfterManifest(Function &F) {
1518     if (DeleteFns)
1519       ToBeDeletedFunctions.insert(&F);
1520   }
1521 
1522   /// If \p IRP is assumed to be a constant, return it, if it is unclear yet,
1523   /// return None, otherwise return `nullptr`.
1524   Optional<Constant *> getAssumedConstant(const IRPosition &IRP,
1525                                           const AbstractAttribute &AA,
1526                                           bool &UsedAssumedInformation);
1527   Optional<Constant *> getAssumedConstant(const Value &V,
1528                                           const AbstractAttribute &AA,
1529                                           bool &UsedAssumedInformation) {
1530     return getAssumedConstant(IRPosition::value(V), AA, UsedAssumedInformation);
1531   }
1532 
1533   /// If \p V is assumed simplified, return it, if it is unclear yet,
1534   /// return None, otherwise return `nullptr`.
1535   Optional<Value *> getAssumedSimplified(const IRPosition &IRP,
1536                                          const AbstractAttribute &AA,
1537                                          bool &UsedAssumedInformation) {
1538     return getAssumedSimplified(IRP, &AA, UsedAssumedInformation);
1539   }
1540   Optional<Value *> getAssumedSimplified(const Value &V,
1541                                          const AbstractAttribute &AA,
1542                                          bool &UsedAssumedInformation) {
1543     return getAssumedSimplified(IRPosition::value(V), AA,
1544                                 UsedAssumedInformation);
1545   }
1546 
1547   /// If \p V is assumed simplified, return it, if it is unclear yet,
1548   /// return None, otherwise return `nullptr`. Same as the public version
1549   /// except that it can be used without recording dependences on any \p AA.
1550   Optional<Value *> getAssumedSimplified(const IRPosition &V,
1551                                          const AbstractAttribute *AA,
1552                                          bool &UsedAssumedInformation);
1553 
1554   /// Register \p CB as a simplification callback.
1555   /// `Attributor::getAssumedSimplified` will use these callbacks before
1556   /// we it will ask `AAValueSimplify`. It is important to ensure this
1557   /// is called before `identifyDefaultAbstractAttributes`, assuming the
1558   /// latter is called at all.
1559   using SimplifictionCallbackTy = std::function<Optional<Value *>(
1560       const IRPosition &, const AbstractAttribute *, bool &)>;
1561   void registerSimplificationCallback(const IRPosition &IRP,
1562                                       const SimplifictionCallbackTy &CB) {
1563     SimplificationCallbacks[IRP].emplace_back(CB);
1564   }
1565 
1566   /// Return true if there is a simplification callback for \p IRP.
1567   bool hasSimplificationCallback(const IRPosition &IRP) {
1568     return SimplificationCallbacks.count(IRP);
1569   }
1570 
1571 private:
1572   /// The vector with all simplification callbacks registered by outside AAs.
1573   DenseMap<IRPosition, SmallVector<SimplifictionCallbackTy, 1>>
1574       SimplificationCallbacks;
1575 
1576 public:
1577   /// Translate \p V from the callee context into the call site context.
1578   Optional<Value *>
1579   translateArgumentToCallSiteContent(Optional<Value *> V, CallBase &CB,
1580                                      const AbstractAttribute &AA,
1581                                      bool &UsedAssumedInformation);
1582 
1583   /// Return true if \p AA (or its context instruction) is assumed dead.
1584   ///
1585   /// If \p LivenessAA is not provided it is queried.
1586   bool isAssumedDead(const AbstractAttribute &AA, const AAIsDead *LivenessAA,
1587                      bool &UsedAssumedInformation,
1588                      bool CheckBBLivenessOnly = false,
1589                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1590 
1591   /// Return true if \p I is assumed dead.
1592   ///
1593   /// If \p LivenessAA is not provided it is queried.
1594   bool isAssumedDead(const Instruction &I, const AbstractAttribute *QueryingAA,
1595                      const AAIsDead *LivenessAA, bool &UsedAssumedInformation,
1596                      bool CheckBBLivenessOnly = false,
1597                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1598 
1599   /// Return true if \p U is assumed dead.
1600   ///
1601   /// If \p FnLivenessAA is not provided it is queried.
1602   bool isAssumedDead(const Use &U, const AbstractAttribute *QueryingAA,
1603                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1604                      bool CheckBBLivenessOnly = false,
1605                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1606 
1607   /// Return true if \p IRP is assumed dead.
1608   ///
1609   /// If \p FnLivenessAA is not provided it is queried.
1610   bool isAssumedDead(const IRPosition &IRP, const AbstractAttribute *QueryingAA,
1611                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1612                      bool CheckBBLivenessOnly = false,
1613                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1614 
1615   /// Return true if \p BB is assumed dead.
1616   ///
1617   /// If \p LivenessAA is not provided it is queried.
1618   bool isAssumedDead(const BasicBlock &BB, const AbstractAttribute *QueryingAA,
1619                      const AAIsDead *FnLivenessAA,
1620                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1621 
1622   /// Check \p Pred on all (transitive) uses of \p V.
1623   ///
1624   /// This method will evaluate \p Pred on all (transitive) uses of the
1625   /// associated value and return true if \p Pred holds every time.
1626   bool checkForAllUses(function_ref<bool(const Use &, bool &)> Pred,
1627                        const AbstractAttribute &QueryingAA, const Value &V,
1628                        bool CheckBBLivenessOnly = false,
1629                        DepClassTy LivenessDepClass = DepClassTy::OPTIONAL);
1630 
1631   /// Emit a remark generically.
1632   ///
1633   /// This template function can be used to generically emit a remark. The
1634   /// RemarkKind should be one of the following:
1635   ///   - OptimizationRemark to indicate a successful optimization attempt
1636   ///   - OptimizationRemarkMissed to report a failed optimization attempt
1637   ///   - OptimizationRemarkAnalysis to provide additional information about an
1638   ///     optimization attempt
1639   ///
1640   /// The remark is built using a callback function \p RemarkCB that takes a
1641   /// RemarkKind as input and returns a RemarkKind.
1642   template <typename RemarkKind, typename RemarkCallBack>
1643   void emitRemark(Instruction *I, StringRef RemarkName,
1644                   RemarkCallBack &&RemarkCB) const {
1645     if (!OREGetter)
1646       return;
1647 
1648     Function *F = I->getFunction();
1649     auto &ORE = OREGetter.getValue()(F);
1650 
1651     if (RemarkName.startswith("OMP"))
1652       ORE.emit([&]() {
1653         return RemarkCB(RemarkKind(PassName, RemarkName, I))
1654                << " [" << RemarkName << "]";
1655       });
1656     else
1657       ORE.emit([&]() { return RemarkCB(RemarkKind(PassName, RemarkName, I)); });
1658   }
1659 
1660   /// Emit a remark on a function.
1661   template <typename RemarkKind, typename RemarkCallBack>
1662   void emitRemark(Function *F, StringRef RemarkName,
1663                   RemarkCallBack &&RemarkCB) const {
1664     if (!OREGetter)
1665       return;
1666 
1667     auto &ORE = OREGetter.getValue()(F);
1668 
1669     if (RemarkName.startswith("OMP"))
1670       ORE.emit([&]() {
1671         return RemarkCB(RemarkKind(PassName, RemarkName, F))
1672                << " [" << RemarkName << "]";
1673       });
1674     else
1675       ORE.emit([&]() { return RemarkCB(RemarkKind(PassName, RemarkName, F)); });
1676   }
1677 
1678   /// Helper struct used in the communication between an abstract attribute (AA)
1679   /// that wants to change the signature of a function and the Attributor which
1680   /// applies the changes. The struct is partially initialized with the
1681   /// information from the AA (see the constructor). All other members are
1682   /// provided by the Attributor prior to invoking any callbacks.
1683   struct ArgumentReplacementInfo {
1684     /// Callee repair callback type
1685     ///
1686     /// The function repair callback is invoked once to rewire the replacement
1687     /// arguments in the body of the new function. The argument replacement info
1688     /// is passed, as build from the registerFunctionSignatureRewrite call, as
1689     /// well as the replacement function and an iteratore to the first
1690     /// replacement argument.
1691     using CalleeRepairCBTy = std::function<void(
1692         const ArgumentReplacementInfo &, Function &, Function::arg_iterator)>;
1693 
1694     /// Abstract call site (ACS) repair callback type
1695     ///
1696     /// The abstract call site repair callback is invoked once on every abstract
1697     /// call site of the replaced function (\see ReplacedFn). The callback needs
1698     /// to provide the operands for the call to the new replacement function.
1699     /// The number and type of the operands appended to the provided vector
1700     /// (second argument) is defined by the number and types determined through
1701     /// the replacement type vector (\see ReplacementTypes). The first argument
1702     /// is the ArgumentReplacementInfo object registered with the Attributor
1703     /// through the registerFunctionSignatureRewrite call.
1704     using ACSRepairCBTy =
1705         std::function<void(const ArgumentReplacementInfo &, AbstractCallSite,
1706                            SmallVectorImpl<Value *> &)>;
1707 
1708     /// Simple getters, see the corresponding members for details.
1709     ///{
1710 
1711     Attributor &getAttributor() const { return A; }
1712     const Function &getReplacedFn() const { return ReplacedFn; }
1713     const Argument &getReplacedArg() const { return ReplacedArg; }
1714     unsigned getNumReplacementArgs() const { return ReplacementTypes.size(); }
1715     const SmallVectorImpl<Type *> &getReplacementTypes() const {
1716       return ReplacementTypes;
1717     }
1718 
1719     ///}
1720 
1721   private:
1722     /// Constructor that takes the argument to be replaced, the types of
1723     /// the replacement arguments, as well as callbacks to repair the call sites
1724     /// and new function after the replacement happened.
1725     ArgumentReplacementInfo(Attributor &A, Argument &Arg,
1726                             ArrayRef<Type *> ReplacementTypes,
1727                             CalleeRepairCBTy &&CalleeRepairCB,
1728                             ACSRepairCBTy &&ACSRepairCB)
1729         : A(A), ReplacedFn(*Arg.getParent()), ReplacedArg(Arg),
1730           ReplacementTypes(ReplacementTypes.begin(), ReplacementTypes.end()),
1731           CalleeRepairCB(std::move(CalleeRepairCB)),
1732           ACSRepairCB(std::move(ACSRepairCB)) {}
1733 
1734     /// Reference to the attributor to allow access from the callbacks.
1735     Attributor &A;
1736 
1737     /// The "old" function replaced by ReplacementFn.
1738     const Function &ReplacedFn;
1739 
1740     /// The "old" argument replaced by new ones defined via ReplacementTypes.
1741     const Argument &ReplacedArg;
1742 
1743     /// The types of the arguments replacing ReplacedArg.
1744     const SmallVector<Type *, 8> ReplacementTypes;
1745 
1746     /// Callee repair callback, see CalleeRepairCBTy.
1747     const CalleeRepairCBTy CalleeRepairCB;
1748 
1749     /// Abstract call site (ACS) repair callback, see ACSRepairCBTy.
1750     const ACSRepairCBTy ACSRepairCB;
1751 
1752     /// Allow access to the private members from the Attributor.
1753     friend struct Attributor;
1754   };
1755 
1756   /// Check if we can rewrite a function signature.
1757   ///
1758   /// The argument \p Arg is replaced with new ones defined by the number,
1759   /// order, and types in \p ReplacementTypes.
1760   ///
1761   /// \returns True, if the replacement can be registered, via
1762   /// registerFunctionSignatureRewrite, false otherwise.
1763   bool isValidFunctionSignatureRewrite(Argument &Arg,
1764                                        ArrayRef<Type *> ReplacementTypes);
1765 
1766   /// Register a rewrite for a function signature.
1767   ///
1768   /// The argument \p Arg is replaced with new ones defined by the number,
1769   /// order, and types in \p ReplacementTypes. The rewiring at the call sites is
1770   /// done through \p ACSRepairCB and at the callee site through
1771   /// \p CalleeRepairCB.
1772   ///
1773   /// \returns True, if the replacement was registered, false otherwise.
1774   bool registerFunctionSignatureRewrite(
1775       Argument &Arg, ArrayRef<Type *> ReplacementTypes,
1776       ArgumentReplacementInfo::CalleeRepairCBTy &&CalleeRepairCB,
1777       ArgumentReplacementInfo::ACSRepairCBTy &&ACSRepairCB);
1778 
1779   /// Check \p Pred on all function call sites.
1780   ///
1781   /// This method will evaluate \p Pred on call sites and return
1782   /// true if \p Pred holds in every call sites. However, this is only possible
1783   /// all call sites are known, hence the function has internal linkage.
1784   /// If true is returned, \p AllCallSitesKnown is set if all possible call
1785   /// sites of the function have been visited.
1786   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1787                             const AbstractAttribute &QueryingAA,
1788                             bool RequireAllCallSites, bool &AllCallSitesKnown);
1789 
1790   /// Check \p Pred on all values potentially returned by \p F.
1791   ///
1792   /// This method will evaluate \p Pred on all values potentially returned by
1793   /// the function associated with \p QueryingAA. The returned values are
1794   /// matched with their respective return instructions. Returns true if \p Pred
1795   /// holds on all of them.
1796   bool checkForAllReturnedValuesAndReturnInsts(
1797       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred,
1798       const AbstractAttribute &QueryingAA);
1799 
1800   /// Check \p Pred on all values potentially returned by the function
1801   /// associated with \p QueryingAA.
1802   ///
1803   /// This is the context insensitive version of the method above.
1804   bool checkForAllReturnedValues(function_ref<bool(Value &)> Pred,
1805                                  const AbstractAttribute &QueryingAA);
1806 
1807   /// Check \p Pred on all instructions with an opcode present in \p Opcodes.
1808   ///
1809   /// This method will evaluate \p Pred on all instructions with an opcode
1810   /// present in \p Opcode and return true if \p Pred holds on all of them.
1811   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
1812                                const AbstractAttribute &QueryingAA,
1813                                const ArrayRef<unsigned> &Opcodes,
1814                                bool &UsedAssumedInformation,
1815                                bool CheckBBLivenessOnly = false,
1816                                bool CheckPotentiallyDead = false);
1817 
1818   /// Check \p Pred on all call-like instructions (=CallBased derived).
1819   ///
1820   /// See checkForAllCallLikeInstructions(...) for more information.
1821   bool checkForAllCallLikeInstructions(function_ref<bool(Instruction &)> Pred,
1822                                        const AbstractAttribute &QueryingAA,
1823                                        bool &UsedAssumedInformation,
1824                                        bool CheckBBLivenessOnly = false,
1825                                        bool CheckPotentiallyDead = false) {
1826     return checkForAllInstructions(
1827         Pred, QueryingAA,
1828         {(unsigned)Instruction::Invoke, (unsigned)Instruction::CallBr,
1829          (unsigned)Instruction::Call},
1830         UsedAssumedInformation, CheckBBLivenessOnly, CheckPotentiallyDead);
1831   }
1832 
1833   /// Check \p Pred on all Read/Write instructions.
1834   ///
1835   /// This method will evaluate \p Pred on all instructions that read or write
1836   /// to memory present in the information cache and return true if \p Pred
1837   /// holds on all of them.
1838   bool checkForAllReadWriteInstructions(function_ref<bool(Instruction &)> Pred,
1839                                         AbstractAttribute &QueryingAA,
1840                                         bool &UsedAssumedInformation);
1841 
1842   /// Create a shallow wrapper for \p F such that \p F has internal linkage
1843   /// afterwards. It also sets the original \p F 's name to anonymous
1844   ///
1845   /// A wrapper is a function with the same type (and attributes) as \p F
1846   /// that will only call \p F and return the result, if any.
1847   ///
1848   /// Assuming the declaration of looks like:
1849   ///   rty F(aty0 arg0, ..., atyN argN);
1850   ///
1851   /// The wrapper will then look as follows:
1852   ///   rty wrapper(aty0 arg0, ..., atyN argN) {
1853   ///     return F(arg0, ..., argN);
1854   ///   }
1855   ///
1856   static void createShallowWrapper(Function &F);
1857 
1858   /// Returns true if the function \p F can be internalized. i.e. it has a
1859   /// compatible linkage.
1860   static bool isInternalizable(Function &F);
1861 
1862   /// Make another copy of the function \p F such that the copied version has
1863   /// internal linkage afterwards and can be analysed. Then we replace all uses
1864   /// of the original function to the copied one
1865   ///
1866   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
1867   /// linkage can be internalized because these linkages guarantee that other
1868   /// definitions with the same name have the same semantics as this one.
1869   ///
1870   /// This will only be run if the `attributor-allow-deep-wrappers` option is
1871   /// set, or if the function is called with \p Force set to true.
1872   ///
1873   /// If the function \p F failed to be internalized the return value will be a
1874   /// null pointer.
1875   static Function *internalizeFunction(Function &F, bool Force = false);
1876 
1877   /// Make copies of each function in the set \p FnSet such that the copied
1878   /// version has internal linkage afterwards and can be analysed. Then we
1879   /// replace all uses of the original function to the copied one. The map
1880   /// \p FnMap contains a mapping of functions to their internalized versions.
1881   ///
1882   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
1883   /// linkage can be internalized because these linkages guarantee that other
1884   /// definitions with the same name have the same semantics as this one.
1885   ///
1886   /// This version will internalize all the functions in the set \p FnSet at
1887   /// once and then replace the uses. This prevents internalized functions being
1888   /// called by external functions when there is an internalized version in the
1889   /// module.
1890   static bool internalizeFunctions(SmallPtrSetImpl<Function *> &FnSet,
1891                                    DenseMap<Function *, Function *> &FnMap);
1892 
1893   /// Return the data layout associated with the anchor scope.
1894   const DataLayout &getDataLayout() const { return InfoCache.DL; }
1895 
1896   /// The allocator used to allocate memory, e.g. for `AbstractAttribute`s.
1897   BumpPtrAllocator &Allocator;
1898 
1899 private:
1900   /// This method will do fixpoint iteration until fixpoint or the
1901   /// maximum iteration count is reached.
1902   ///
1903   /// If the maximum iteration count is reached, This method will
1904   /// indicate pessimistic fixpoint on attributes that transitively depend
1905   /// on attributes that were scheduled for an update.
1906   void runTillFixpoint();
1907 
1908   /// Gets called after scheduling, manifests attributes to the LLVM IR.
1909   ChangeStatus manifestAttributes();
1910 
1911   /// Gets called after attributes have been manifested, cleans up the IR.
1912   /// Deletes dead functions, blocks and instructions.
1913   /// Rewrites function signitures and updates the call graph.
1914   ChangeStatus cleanupIR();
1915 
1916   /// Identify internal functions that are effectively dead, thus not reachable
1917   /// from a live entry point. The functions are added to ToBeDeletedFunctions.
1918   void identifyDeadInternalFunctions();
1919 
1920   /// Run `::update` on \p AA and track the dependences queried while doing so.
1921   /// Also adjust the state if we know further updates are not necessary.
1922   ChangeStatus updateAA(AbstractAttribute &AA);
1923 
1924   /// Remember the dependences on the top of the dependence stack such that they
1925   /// may trigger further updates. (\see DependenceStack)
1926   void rememberDependences();
1927 
1928   /// Check \p Pred on all call sites of \p Fn.
1929   ///
1930   /// This method will evaluate \p Pred on call sites and return
1931   /// true if \p Pred holds in every call sites. However, this is only possible
1932   /// all call sites are known, hence the function has internal linkage.
1933   /// If true is returned, \p AllCallSitesKnown is set if all possible call
1934   /// sites of the function have been visited.
1935   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1936                             const Function &Fn, bool RequireAllCallSites,
1937                             const AbstractAttribute *QueryingAA,
1938                             bool &AllCallSitesKnown);
1939 
1940   /// Determine if CallBase context in \p IRP should be propagated.
1941   bool shouldPropagateCallBaseContext(const IRPosition &IRP);
1942 
1943   /// Apply all requested function signature rewrites
1944   /// (\see registerFunctionSignatureRewrite) and return Changed if the module
1945   /// was altered.
1946   ChangeStatus
1947   rewriteFunctionSignatures(SmallPtrSetImpl<Function *> &ModifiedFns);
1948 
1949   /// Check if the Attribute \p AA should be seeded.
1950   /// See getOrCreateAAFor.
1951   bool shouldSeedAttribute(AbstractAttribute &AA);
1952 
1953   /// A nested map to lookup abstract attributes based on the argument position
1954   /// on the outer level, and the addresses of the static member (AAType::ID) on
1955   /// the inner level.
1956   ///{
1957   using AAMapKeyTy = std::pair<const char *, IRPosition>;
1958   DenseMap<AAMapKeyTy, AbstractAttribute *> AAMap;
1959   ///}
1960 
1961   /// Map to remember all requested signature changes (= argument replacements).
1962   DenseMap<Function *, SmallVector<std::unique_ptr<ArgumentReplacementInfo>, 8>>
1963       ArgumentReplacementMap;
1964 
1965   /// The set of functions we are deriving attributes for.
1966   SetVector<Function *> &Functions;
1967 
1968   /// The information cache that holds pre-processed (LLVM-IR) information.
1969   InformationCache &InfoCache;
1970 
1971   /// Helper to update an underlying call graph.
1972   CallGraphUpdater &CGUpdater;
1973 
1974   /// Abstract Attribute dependency graph
1975   AADepGraph DG;
1976 
1977   /// Set of functions for which we modified the content such that it might
1978   /// impact the call graph.
1979   SmallPtrSet<Function *, 8> CGModifiedFunctions;
1980 
1981   /// Information about a dependence. If FromAA is changed ToAA needs to be
1982   /// updated as well.
1983   struct DepInfo {
1984     const AbstractAttribute *FromAA;
1985     const AbstractAttribute *ToAA;
1986     DepClassTy DepClass;
1987   };
1988 
1989   /// The dependence stack is used to track dependences during an
1990   /// `AbstractAttribute::update` call. As `AbstractAttribute::update` can be
1991   /// recursive we might have multiple vectors of dependences in here. The stack
1992   /// size, should be adjusted according to the expected recursion depth and the
1993   /// inner dependence vector size to the expected number of dependences per
1994   /// abstract attribute. Since the inner vectors are actually allocated on the
1995   /// stack we can be generous with their size.
1996   using DependenceVector = SmallVector<DepInfo, 8>;
1997   SmallVector<DependenceVector *, 16> DependenceStack;
1998 
1999   /// If not null, a set limiting the attribute opportunities.
2000   const DenseSet<const char *> *Allowed;
2001 
2002   /// Whether to delete functions.
2003   const bool DeleteFns;
2004 
2005   /// Whether to rewrite signatures.
2006   const bool RewriteSignatures;
2007 
2008   /// Maximum number of fixedpoint iterations.
2009   Optional<unsigned> MaxFixpointIterations;
2010 
2011   /// A set to remember the functions we already assume to be live and visited.
2012   DenseSet<const Function *> VisitedFunctions;
2013 
2014   /// Uses we replace with a new value after manifest is done. We will remove
2015   /// then trivially dead instructions as well.
2016   DenseMap<Use *, Value *> ToBeChangedUses;
2017 
2018   /// Values we replace with a new value after manifest is done. We will remove
2019   /// then trivially dead instructions as well.
2020   DenseMap<Value *, std::pair<Value *, bool>> ToBeChangedValues;
2021 
2022   /// Instructions we replace with `unreachable` insts after manifest is done.
2023   SmallDenseSet<WeakVH, 16> ToBeChangedToUnreachableInsts;
2024 
2025   /// Invoke instructions with at least a single dead successor block.
2026   SmallVector<WeakVH, 16> InvokeWithDeadSuccessor;
2027 
2028   /// A flag that indicates which stage of the process we are in. Initially, the
2029   /// phase is SEEDING. Phase is changed in `Attributor::run()`
2030   enum class AttributorPhase {
2031     SEEDING,
2032     UPDATE,
2033     MANIFEST,
2034     CLEANUP,
2035   } Phase = AttributorPhase::SEEDING;
2036 
2037   /// The current initialization chain length. Tracked to avoid stack overflows.
2038   unsigned InitializationChainLength = 0;
2039 
2040   /// Functions, blocks, and instructions we delete after manifest is done.
2041   ///
2042   ///{
2043   SmallPtrSet<Function *, 8> ToBeDeletedFunctions;
2044   SmallPtrSet<BasicBlock *, 8> ToBeDeletedBlocks;
2045   SmallPtrSet<BasicBlock *, 8> ManifestAddedBlocks;
2046   SmallDenseSet<WeakVH, 8> ToBeDeletedInsts;
2047   ///}
2048 
2049   /// Callback to get an OptimizationRemarkEmitter from a Function *.
2050   Optional<OptimizationRemarkGetter> OREGetter;
2051 
2052   /// The name of the pass to emit remarks for.
2053   const char *PassName = "";
2054 
2055   friend AADepGraph;
2056   friend AttributorCallGraph;
2057 };
2058 
2059 /// An interface to query the internal state of an abstract attribute.
2060 ///
2061 /// The abstract state is a minimal interface that allows the Attributor to
2062 /// communicate with the abstract attributes about their internal state without
2063 /// enforcing or exposing implementation details, e.g., the (existence of an)
2064 /// underlying lattice.
2065 ///
2066 /// It is sufficient to be able to query if a state is (1) valid or invalid, (2)
2067 /// at a fixpoint, and to indicate to the state that (3) an optimistic fixpoint
2068 /// was reached or (4) a pessimistic fixpoint was enforced.
2069 ///
2070 /// All methods need to be implemented by the subclass. For the common use case,
2071 /// a single boolean state or a bit-encoded state, the BooleanState and
2072 /// {Inc,Dec,Bit}IntegerState classes are already provided. An abstract
2073 /// attribute can inherit from them to get the abstract state interface and
2074 /// additional methods to directly modify the state based if needed. See the
2075 /// class comments for help.
2076 struct AbstractState {
2077   virtual ~AbstractState() {}
2078 
2079   /// Return if this abstract state is in a valid state. If false, no
2080   /// information provided should be used.
2081   virtual bool isValidState() const = 0;
2082 
2083   /// Return if this abstract state is fixed, thus does not need to be updated
2084   /// if information changes as it cannot change itself.
2085   virtual bool isAtFixpoint() const = 0;
2086 
2087   /// Indicate that the abstract state should converge to the optimistic state.
2088   ///
2089   /// This will usually make the optimistically assumed state the known to be
2090   /// true state.
2091   ///
2092   /// \returns ChangeStatus::UNCHANGED as the assumed value should not change.
2093   virtual ChangeStatus indicateOptimisticFixpoint() = 0;
2094 
2095   /// Indicate that the abstract state should converge to the pessimistic state.
2096   ///
2097   /// This will usually revert the optimistically assumed state to the known to
2098   /// be true state.
2099   ///
2100   /// \returns ChangeStatus::CHANGED as the assumed value may change.
2101   virtual ChangeStatus indicatePessimisticFixpoint() = 0;
2102 };
2103 
2104 /// Simple state with integers encoding.
2105 ///
2106 /// The interface ensures that the assumed bits are always a subset of the known
2107 /// bits. Users can only add known bits and, except through adding known bits,
2108 /// they can only remove assumed bits. This should guarantee monotoniticy and
2109 /// thereby the existence of a fixpoint (if used corretly). The fixpoint is
2110 /// reached when the assumed and known state/bits are equal. Users can
2111 /// force/inidicate a fixpoint. If an optimistic one is indicated, the known
2112 /// state will catch up with the assumed one, for a pessimistic fixpoint it is
2113 /// the other way around.
2114 template <typename base_ty, base_ty BestState, base_ty WorstState>
2115 struct IntegerStateBase : public AbstractState {
2116   using base_t = base_ty;
2117 
2118   IntegerStateBase() {}
2119   IntegerStateBase(base_t Assumed) : Assumed(Assumed) {}
2120 
2121   /// Return the best possible representable state.
2122   static constexpr base_t getBestState() { return BestState; }
2123   static constexpr base_t getBestState(const IntegerStateBase &) {
2124     return getBestState();
2125   }
2126 
2127   /// Return the worst possible representable state.
2128   static constexpr base_t getWorstState() { return WorstState; }
2129   static constexpr base_t getWorstState(const IntegerStateBase &) {
2130     return getWorstState();
2131   }
2132 
2133   /// See AbstractState::isValidState()
2134   /// NOTE: For now we simply pretend that the worst possible state is invalid.
2135   bool isValidState() const override { return Assumed != getWorstState(); }
2136 
2137   /// See AbstractState::isAtFixpoint()
2138   bool isAtFixpoint() const override { return Assumed == Known; }
2139 
2140   /// See AbstractState::indicateOptimisticFixpoint(...)
2141   ChangeStatus indicateOptimisticFixpoint() override {
2142     Known = Assumed;
2143     return ChangeStatus::UNCHANGED;
2144   }
2145 
2146   /// See AbstractState::indicatePessimisticFixpoint(...)
2147   ChangeStatus indicatePessimisticFixpoint() override {
2148     Assumed = Known;
2149     return ChangeStatus::CHANGED;
2150   }
2151 
2152   /// Return the known state encoding
2153   base_t getKnown() const { return Known; }
2154 
2155   /// Return the assumed state encoding.
2156   base_t getAssumed() const { return Assumed; }
2157 
2158   /// Equality for IntegerStateBase.
2159   bool
2160   operator==(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2161     return this->getAssumed() == R.getAssumed() &&
2162            this->getKnown() == R.getKnown();
2163   }
2164 
2165   /// Inequality for IntegerStateBase.
2166   bool
2167   operator!=(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2168     return !(*this == R);
2169   }
2170 
2171   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2172   /// intended that only information assumed in both states will be assumed in
2173   /// this one afterwards.
2174   void operator^=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2175     handleNewAssumedValue(R.getAssumed());
2176   }
2177 
2178   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2179   /// intended that information known in either state will be known in
2180   /// this one afterwards.
2181   void operator+=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2182     handleNewKnownValue(R.getKnown());
2183   }
2184 
2185   void operator|=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2186     joinOR(R.getAssumed(), R.getKnown());
2187   }
2188 
2189   void operator&=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2190     joinAND(R.getAssumed(), R.getKnown());
2191   }
2192 
2193 protected:
2194   /// Handle a new assumed value \p Value. Subtype dependent.
2195   virtual void handleNewAssumedValue(base_t Value) = 0;
2196 
2197   /// Handle a new known value \p Value. Subtype dependent.
2198   virtual void handleNewKnownValue(base_t Value) = 0;
2199 
2200   /// Handle a  value \p Value. Subtype dependent.
2201   virtual void joinOR(base_t AssumedValue, base_t KnownValue) = 0;
2202 
2203   /// Handle a new assumed value \p Value. Subtype dependent.
2204   virtual void joinAND(base_t AssumedValue, base_t KnownValue) = 0;
2205 
2206   /// The known state encoding in an integer of type base_t.
2207   base_t Known = getWorstState();
2208 
2209   /// The assumed state encoding in an integer of type base_t.
2210   base_t Assumed = getBestState();
2211 };
2212 
2213 /// Specialization of the integer state for a bit-wise encoding.
2214 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2215           base_ty WorstState = 0>
2216 struct BitIntegerState
2217     : public IntegerStateBase<base_ty, BestState, WorstState> {
2218   using base_t = base_ty;
2219 
2220   /// Return true if the bits set in \p BitsEncoding are "known bits".
2221   bool isKnown(base_t BitsEncoding) const {
2222     return (this->Known & BitsEncoding) == BitsEncoding;
2223   }
2224 
2225   /// Return true if the bits set in \p BitsEncoding are "assumed bits".
2226   bool isAssumed(base_t BitsEncoding) const {
2227     return (this->Assumed & BitsEncoding) == BitsEncoding;
2228   }
2229 
2230   /// Add the bits in \p BitsEncoding to the "known bits".
2231   BitIntegerState &addKnownBits(base_t Bits) {
2232     // Make sure we never miss any "known bits".
2233     this->Assumed |= Bits;
2234     this->Known |= Bits;
2235     return *this;
2236   }
2237 
2238   /// Remove the bits in \p BitsEncoding from the "assumed bits" if not known.
2239   BitIntegerState &removeAssumedBits(base_t BitsEncoding) {
2240     return intersectAssumedBits(~BitsEncoding);
2241   }
2242 
2243   /// Remove the bits in \p BitsEncoding from the "known bits".
2244   BitIntegerState &removeKnownBits(base_t BitsEncoding) {
2245     this->Known = (this->Known & ~BitsEncoding);
2246     return *this;
2247   }
2248 
2249   /// Keep only "assumed bits" also set in \p BitsEncoding but all known ones.
2250   BitIntegerState &intersectAssumedBits(base_t BitsEncoding) {
2251     // Make sure we never loose any "known bits".
2252     this->Assumed = (this->Assumed & BitsEncoding) | this->Known;
2253     return *this;
2254   }
2255 
2256 private:
2257   void handleNewAssumedValue(base_t Value) override {
2258     intersectAssumedBits(Value);
2259   }
2260   void handleNewKnownValue(base_t Value) override { addKnownBits(Value); }
2261   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2262     this->Known |= KnownValue;
2263     this->Assumed |= AssumedValue;
2264   }
2265   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2266     this->Known &= KnownValue;
2267     this->Assumed &= AssumedValue;
2268   }
2269 };
2270 
2271 /// Specialization of the integer state for an increasing value, hence ~0u is
2272 /// the best state and 0 the worst.
2273 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2274           base_ty WorstState = 0>
2275 struct IncIntegerState
2276     : public IntegerStateBase<base_ty, BestState, WorstState> {
2277   using super = IntegerStateBase<base_ty, BestState, WorstState>;
2278   using base_t = base_ty;
2279 
2280   IncIntegerState() : super() {}
2281   IncIntegerState(base_t Assumed) : super(Assumed) {}
2282 
2283   /// Return the best possible representable state.
2284   static constexpr base_t getBestState() { return BestState; }
2285   static constexpr base_t
2286   getBestState(const IncIntegerState<base_ty, BestState, WorstState> &) {
2287     return getBestState();
2288   }
2289 
2290   /// Take minimum of assumed and \p Value.
2291   IncIntegerState &takeAssumedMinimum(base_t Value) {
2292     // Make sure we never loose "known value".
2293     this->Assumed = std::max(std::min(this->Assumed, Value), this->Known);
2294     return *this;
2295   }
2296 
2297   /// Take maximum of known and \p Value.
2298   IncIntegerState &takeKnownMaximum(base_t Value) {
2299     // Make sure we never loose "known value".
2300     this->Assumed = std::max(Value, this->Assumed);
2301     this->Known = std::max(Value, this->Known);
2302     return *this;
2303   }
2304 
2305 private:
2306   void handleNewAssumedValue(base_t Value) override {
2307     takeAssumedMinimum(Value);
2308   }
2309   void handleNewKnownValue(base_t Value) override { takeKnownMaximum(Value); }
2310   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2311     this->Known = std::max(this->Known, KnownValue);
2312     this->Assumed = std::max(this->Assumed, AssumedValue);
2313   }
2314   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2315     this->Known = std::min(this->Known, KnownValue);
2316     this->Assumed = std::min(this->Assumed, AssumedValue);
2317   }
2318 };
2319 
2320 /// Specialization of the integer state for a decreasing value, hence 0 is the
2321 /// best state and ~0u the worst.
2322 template <typename base_ty = uint32_t>
2323 struct DecIntegerState : public IntegerStateBase<base_ty, 0, ~base_ty(0)> {
2324   using base_t = base_ty;
2325 
2326   /// Take maximum of assumed and \p Value.
2327   DecIntegerState &takeAssumedMaximum(base_t Value) {
2328     // Make sure we never loose "known value".
2329     this->Assumed = std::min(std::max(this->Assumed, Value), this->Known);
2330     return *this;
2331   }
2332 
2333   /// Take minimum of known and \p Value.
2334   DecIntegerState &takeKnownMinimum(base_t Value) {
2335     // Make sure we never loose "known value".
2336     this->Assumed = std::min(Value, this->Assumed);
2337     this->Known = std::min(Value, this->Known);
2338     return *this;
2339   }
2340 
2341 private:
2342   void handleNewAssumedValue(base_t Value) override {
2343     takeAssumedMaximum(Value);
2344   }
2345   void handleNewKnownValue(base_t Value) override { takeKnownMinimum(Value); }
2346   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2347     this->Assumed = std::min(this->Assumed, KnownValue);
2348     this->Assumed = std::min(this->Assumed, AssumedValue);
2349   }
2350   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2351     this->Assumed = std::max(this->Assumed, KnownValue);
2352     this->Assumed = std::max(this->Assumed, AssumedValue);
2353   }
2354 };
2355 
2356 /// Simple wrapper for a single bit (boolean) state.
2357 struct BooleanState : public IntegerStateBase<bool, 1, 0> {
2358   using super = IntegerStateBase<bool, 1, 0>;
2359   using base_t = IntegerStateBase::base_t;
2360 
2361   BooleanState() : super() {}
2362   BooleanState(base_t Assumed) : super(Assumed) {}
2363 
2364   /// Set the assumed value to \p Value but never below the known one.
2365   void setAssumed(bool Value) { Assumed &= (Known | Value); }
2366 
2367   /// Set the known and asssumed value to \p Value.
2368   void setKnown(bool Value) {
2369     Known |= Value;
2370     Assumed |= Value;
2371   }
2372 
2373   /// Return true if the state is assumed to hold.
2374   bool isAssumed() const { return getAssumed(); }
2375 
2376   /// Return true if the state is known to hold.
2377   bool isKnown() const { return getKnown(); }
2378 
2379 private:
2380   void handleNewAssumedValue(base_t Value) override {
2381     if (!Value)
2382       Assumed = Known;
2383   }
2384   void handleNewKnownValue(base_t Value) override {
2385     if (Value)
2386       Known = (Assumed = Value);
2387   }
2388   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2389     Known |= KnownValue;
2390     Assumed |= AssumedValue;
2391   }
2392   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2393     Known &= KnownValue;
2394     Assumed &= AssumedValue;
2395   }
2396 };
2397 
2398 /// State for an integer range.
2399 struct IntegerRangeState : public AbstractState {
2400 
2401   /// Bitwidth of the associated value.
2402   uint32_t BitWidth;
2403 
2404   /// State representing assumed range, initially set to empty.
2405   ConstantRange Assumed;
2406 
2407   /// State representing known range, initially set to [-inf, inf].
2408   ConstantRange Known;
2409 
2410   IntegerRangeState(uint32_t BitWidth)
2411       : BitWidth(BitWidth), Assumed(ConstantRange::getEmpty(BitWidth)),
2412         Known(ConstantRange::getFull(BitWidth)) {}
2413 
2414   IntegerRangeState(const ConstantRange &CR)
2415       : BitWidth(CR.getBitWidth()), Assumed(CR),
2416         Known(getWorstState(CR.getBitWidth())) {}
2417 
2418   /// Return the worst possible representable state.
2419   static ConstantRange getWorstState(uint32_t BitWidth) {
2420     return ConstantRange::getFull(BitWidth);
2421   }
2422 
2423   /// Return the best possible representable state.
2424   static ConstantRange getBestState(uint32_t BitWidth) {
2425     return ConstantRange::getEmpty(BitWidth);
2426   }
2427   static ConstantRange getBestState(const IntegerRangeState &IRS) {
2428     return getBestState(IRS.getBitWidth());
2429   }
2430 
2431   /// Return associated values' bit width.
2432   uint32_t getBitWidth() const { return BitWidth; }
2433 
2434   /// See AbstractState::isValidState()
2435   bool isValidState() const override {
2436     return BitWidth > 0 && !Assumed.isFullSet();
2437   }
2438 
2439   /// See AbstractState::isAtFixpoint()
2440   bool isAtFixpoint() const override { return Assumed == Known; }
2441 
2442   /// See AbstractState::indicateOptimisticFixpoint(...)
2443   ChangeStatus indicateOptimisticFixpoint() override {
2444     Known = Assumed;
2445     return ChangeStatus::CHANGED;
2446   }
2447 
2448   /// See AbstractState::indicatePessimisticFixpoint(...)
2449   ChangeStatus indicatePessimisticFixpoint() override {
2450     Assumed = Known;
2451     return ChangeStatus::CHANGED;
2452   }
2453 
2454   /// Return the known state encoding
2455   ConstantRange getKnown() const { return Known; }
2456 
2457   /// Return the assumed state encoding.
2458   ConstantRange getAssumed() const { return Assumed; }
2459 
2460   /// Unite assumed range with the passed state.
2461   void unionAssumed(const ConstantRange &R) {
2462     // Don't loose a known range.
2463     Assumed = Assumed.unionWith(R).intersectWith(Known);
2464   }
2465 
2466   /// See IntegerRangeState::unionAssumed(..).
2467   void unionAssumed(const IntegerRangeState &R) {
2468     unionAssumed(R.getAssumed());
2469   }
2470 
2471   /// Unite known range with the passed state.
2472   void unionKnown(const ConstantRange &R) {
2473     // Don't loose a known range.
2474     Known = Known.unionWith(R);
2475     Assumed = Assumed.unionWith(Known);
2476   }
2477 
2478   /// See IntegerRangeState::unionKnown(..).
2479   void unionKnown(const IntegerRangeState &R) { unionKnown(R.getKnown()); }
2480 
2481   /// Intersect known range with the passed state.
2482   void intersectKnown(const ConstantRange &R) {
2483     Assumed = Assumed.intersectWith(R);
2484     Known = Known.intersectWith(R);
2485   }
2486 
2487   /// See IntegerRangeState::intersectKnown(..).
2488   void intersectKnown(const IntegerRangeState &R) {
2489     intersectKnown(R.getKnown());
2490   }
2491 
2492   /// Equality for IntegerRangeState.
2493   bool operator==(const IntegerRangeState &R) const {
2494     return getAssumed() == R.getAssumed() && getKnown() == R.getKnown();
2495   }
2496 
2497   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2498   /// intended that only information assumed in both states will be assumed in
2499   /// this one afterwards.
2500   IntegerRangeState operator^=(const IntegerRangeState &R) {
2501     // NOTE: `^=` operator seems like `intersect` but in this case, we need to
2502     // take `union`.
2503     unionAssumed(R);
2504     return *this;
2505   }
2506 
2507   IntegerRangeState operator&=(const IntegerRangeState &R) {
2508     // NOTE: `&=` operator seems like `intersect` but in this case, we need to
2509     // take `union`.
2510     unionKnown(R);
2511     unionAssumed(R);
2512     return *this;
2513   }
2514 };
2515 /// Helper struct necessary as the modular build fails if the virtual method
2516 /// IRAttribute::manifest is defined in the Attributor.cpp.
2517 struct IRAttributeManifest {
2518   static ChangeStatus manifestAttrs(Attributor &A, const IRPosition &IRP,
2519                                     const ArrayRef<Attribute> &DeducedAttrs,
2520                                     bool ForceReplace = false);
2521 };
2522 
2523 /// Helper to tie a abstract state implementation to an abstract attribute.
2524 template <typename StateTy, typename BaseType, class... Ts>
2525 struct StateWrapper : public BaseType, public StateTy {
2526   /// Provide static access to the type of the state.
2527   using StateType = StateTy;
2528 
2529   StateWrapper(const IRPosition &IRP, Ts... Args)
2530       : BaseType(IRP), StateTy(Args...) {}
2531 
2532   /// See AbstractAttribute::getState(...).
2533   StateType &getState() override { return *this; }
2534 
2535   /// See AbstractAttribute::getState(...).
2536   const StateType &getState() const override { return *this; }
2537 };
2538 
2539 /// Helper class that provides common functionality to manifest IR attributes.
2540 template <Attribute::AttrKind AK, typename BaseType>
2541 struct IRAttribute : public BaseType {
2542   IRAttribute(const IRPosition &IRP) : BaseType(IRP) {}
2543 
2544   /// See AbstractAttribute::initialize(...).
2545   virtual void initialize(Attributor &A) override {
2546     const IRPosition &IRP = this->getIRPosition();
2547     if (isa<UndefValue>(IRP.getAssociatedValue()) ||
2548         this->hasAttr(getAttrKind(), /* IgnoreSubsumingPositions */ false,
2549                       &A)) {
2550       this->getState().indicateOptimisticFixpoint();
2551       return;
2552     }
2553 
2554     bool IsFnInterface = IRP.isFnInterfaceKind();
2555     const Function *FnScope = IRP.getAnchorScope();
2556     // TODO: Not all attributes require an exact definition. Find a way to
2557     //       enable deduction for some but not all attributes in case the
2558     //       definition might be changed at runtime, see also
2559     //       http://lists.llvm.org/pipermail/llvm-dev/2018-February/121275.html.
2560     // TODO: We could always determine abstract attributes and if sufficient
2561     //       information was found we could duplicate the functions that do not
2562     //       have an exact definition.
2563     if (IsFnInterface && (!FnScope || !A.isFunctionIPOAmendable(*FnScope)))
2564       this->getState().indicatePessimisticFixpoint();
2565   }
2566 
2567   /// See AbstractAttribute::manifest(...).
2568   ChangeStatus manifest(Attributor &A) override {
2569     if (isa<UndefValue>(this->getIRPosition().getAssociatedValue()))
2570       return ChangeStatus::UNCHANGED;
2571     SmallVector<Attribute, 4> DeducedAttrs;
2572     getDeducedAttributes(this->getAnchorValue().getContext(), DeducedAttrs);
2573     return IRAttributeManifest::manifestAttrs(A, this->getIRPosition(),
2574                                               DeducedAttrs);
2575   }
2576 
2577   /// Return the kind that identifies the abstract attribute implementation.
2578   Attribute::AttrKind getAttrKind() const { return AK; }
2579 
2580   /// Return the deduced attributes in \p Attrs.
2581   virtual void getDeducedAttributes(LLVMContext &Ctx,
2582                                     SmallVectorImpl<Attribute> &Attrs) const {
2583     Attrs.emplace_back(Attribute::get(Ctx, getAttrKind()));
2584   }
2585 };
2586 
2587 /// Base struct for all "concrete attribute" deductions.
2588 ///
2589 /// The abstract attribute is a minimal interface that allows the Attributor to
2590 /// orchestrate the abstract/fixpoint analysis. The design allows to hide away
2591 /// implementation choices made for the subclasses but also to structure their
2592 /// implementation and simplify the use of other abstract attributes in-flight.
2593 ///
2594 /// To allow easy creation of new attributes, most methods have default
2595 /// implementations. The ones that do not are generally straight forward, except
2596 /// `AbstractAttribute::updateImpl` which is the location of most reasoning
2597 /// associated with the abstract attribute. The update is invoked by the
2598 /// Attributor in case the situation used to justify the current optimistic
2599 /// state might have changed. The Attributor determines this automatically
2600 /// by monitoring the `Attributor::getAAFor` calls made by abstract attributes.
2601 ///
2602 /// The `updateImpl` method should inspect the IR and other abstract attributes
2603 /// in-flight to justify the best possible (=optimistic) state. The actual
2604 /// implementation is, similar to the underlying abstract state encoding, not
2605 /// exposed. In the most common case, the `updateImpl` will go through a list of
2606 /// reasons why its optimistic state is valid given the current information. If
2607 /// any combination of them holds and is sufficient to justify the current
2608 /// optimistic state, the method shall return UNCHAGED. If not, the optimistic
2609 /// state is adjusted to the situation and the method shall return CHANGED.
2610 ///
2611 /// If the manifestation of the "concrete attribute" deduced by the subclass
2612 /// differs from the "default" behavior, which is a (set of) LLVM-IR
2613 /// attribute(s) for an argument, call site argument, function return value, or
2614 /// function, the `AbstractAttribute::manifest` method should be overloaded.
2615 ///
2616 /// NOTE: If the state obtained via getState() is INVALID, thus if
2617 ///       AbstractAttribute::getState().isValidState() returns false, no
2618 ///       information provided by the methods of this class should be used.
2619 /// NOTE: The Attributor currently has certain limitations to what we can do.
2620 ///       As a general rule of thumb, "concrete" abstract attributes should *for
2621 ///       now* only perform "backward" information propagation. That means
2622 ///       optimistic information obtained through abstract attributes should
2623 ///       only be used at positions that precede the origin of the information
2624 ///       with regards to the program flow. More practically, information can
2625 ///       *now* be propagated from instructions to their enclosing function, but
2626 ///       *not* from call sites to the called function. The mechanisms to allow
2627 ///       both directions will be added in the future.
2628 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
2629 ///       described in the file comment.
2630 struct AbstractAttribute : public IRPosition, public AADepGraphNode {
2631   using StateType = AbstractState;
2632 
2633   AbstractAttribute(const IRPosition &IRP) : IRPosition(IRP) {}
2634 
2635   /// Virtual destructor.
2636   virtual ~AbstractAttribute() {}
2637 
2638   /// This function is used to identify if an \p DGN is of type
2639   /// AbstractAttribute so that the dyn_cast and cast can use such information
2640   /// to cast an AADepGraphNode to an AbstractAttribute.
2641   ///
2642   /// We eagerly return true here because all AADepGraphNodes except for the
2643   /// Synthethis Node are of type AbstractAttribute
2644   static bool classof(const AADepGraphNode *DGN) { return true; }
2645 
2646   /// Initialize the state with the information in the Attributor \p A.
2647   ///
2648   /// This function is called by the Attributor once all abstract attributes
2649   /// have been identified. It can and shall be used for task like:
2650   ///  - identify existing knowledge in the IR and use it for the "known state"
2651   ///  - perform any work that is not going to change over time, e.g., determine
2652   ///    a subset of the IR, or attributes in-flight, that have to be looked at
2653   ///    in the `updateImpl` method.
2654   virtual void initialize(Attributor &A) {}
2655 
2656   /// Return the internal abstract state for inspection.
2657   virtual StateType &getState() = 0;
2658   virtual const StateType &getState() const = 0;
2659 
2660   /// Return an IR position, see struct IRPosition.
2661   const IRPosition &getIRPosition() const { return *this; };
2662   IRPosition &getIRPosition() { return *this; };
2663 
2664   /// Helper functions, for debug purposes only.
2665   ///{
2666   void print(raw_ostream &OS) const override;
2667   virtual void printWithDeps(raw_ostream &OS) const;
2668   void dump() const { print(dbgs()); }
2669 
2670   /// This function should return the "summarized" assumed state as string.
2671   virtual const std::string getAsStr() const = 0;
2672 
2673   /// This function should return the name of the AbstractAttribute
2674   virtual const std::string getName() const = 0;
2675 
2676   /// This function should return the address of the ID of the AbstractAttribute
2677   virtual const char *getIdAddr() const = 0;
2678   ///}
2679 
2680   /// Allow the Attributor access to the protected methods.
2681   friend struct Attributor;
2682 
2683 protected:
2684   /// Hook for the Attributor to trigger an update of the internal state.
2685   ///
2686   /// If this attribute is already fixed, this method will return UNCHANGED,
2687   /// otherwise it delegates to `AbstractAttribute::updateImpl`.
2688   ///
2689   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
2690   ChangeStatus update(Attributor &A);
2691 
2692   /// Hook for the Attributor to trigger the manifestation of the information
2693   /// represented by the abstract attribute in the LLVM-IR.
2694   ///
2695   /// \Return CHANGED if the IR was altered, otherwise UNCHANGED.
2696   virtual ChangeStatus manifest(Attributor &A) {
2697     return ChangeStatus::UNCHANGED;
2698   }
2699 
2700   /// Hook to enable custom statistic tracking, called after manifest that
2701   /// resulted in a change if statistics are enabled.
2702   ///
2703   /// We require subclasses to provide an implementation so we remember to
2704   /// add statistics for them.
2705   virtual void trackStatistics() const = 0;
2706 
2707   /// The actual update/transfer function which has to be implemented by the
2708   /// derived classes.
2709   ///
2710   /// If it is called, the environment has changed and we have to determine if
2711   /// the current information is still valid or adjust it otherwise.
2712   ///
2713   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
2714   virtual ChangeStatus updateImpl(Attributor &A) = 0;
2715 };
2716 
2717 /// Forward declarations of output streams for debug purposes.
2718 ///
2719 ///{
2720 raw_ostream &operator<<(raw_ostream &OS, const AbstractAttribute &AA);
2721 raw_ostream &operator<<(raw_ostream &OS, ChangeStatus S);
2722 raw_ostream &operator<<(raw_ostream &OS, IRPosition::Kind);
2723 raw_ostream &operator<<(raw_ostream &OS, const IRPosition &);
2724 raw_ostream &operator<<(raw_ostream &OS, const AbstractState &State);
2725 template <typename base_ty, base_ty BestState, base_ty WorstState>
2726 raw_ostream &
2727 operator<<(raw_ostream &OS,
2728            const IntegerStateBase<base_ty, BestState, WorstState> &S) {
2729   return OS << "(" << S.getKnown() << "-" << S.getAssumed() << ")"
2730             << static_cast<const AbstractState &>(S);
2731 }
2732 raw_ostream &operator<<(raw_ostream &OS, const IntegerRangeState &State);
2733 ///}
2734 
2735 struct AttributorPass : public PassInfoMixin<AttributorPass> {
2736   PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM);
2737 };
2738 struct AttributorCGSCCPass : public PassInfoMixin<AttributorCGSCCPass> {
2739   PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
2740                         LazyCallGraph &CG, CGSCCUpdateResult &UR);
2741 };
2742 
2743 Pass *createAttributorLegacyPass();
2744 Pass *createAttributorCGSCCLegacyPass();
2745 
2746 /// Helper function to clamp a state \p S of type \p StateType with the
2747 /// information in \p R and indicate/return if \p S did change (as-in update is
2748 /// required to be run again).
2749 template <typename StateType>
2750 ChangeStatus clampStateAndIndicateChange(StateType &S, const StateType &R) {
2751   auto Assumed = S.getAssumed();
2752   S ^= R;
2753   return Assumed == S.getAssumed() ? ChangeStatus::UNCHANGED
2754                                    : ChangeStatus::CHANGED;
2755 }
2756 
2757 /// ----------------------------------------------------------------------------
2758 ///                       Abstract Attribute Classes
2759 /// ----------------------------------------------------------------------------
2760 
2761 /// An abstract attribute for the returned values of a function.
2762 struct AAReturnedValues
2763     : public IRAttribute<Attribute::Returned, AbstractAttribute> {
2764   AAReturnedValues(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2765 
2766   /// Return an assumed unique return value if a single candidate is found. If
2767   /// there cannot be one, return a nullptr. If it is not clear yet, return the
2768   /// Optional::NoneType.
2769   Optional<Value *> getAssumedUniqueReturnValue(Attributor &A) const;
2770 
2771   /// Check \p Pred on all returned values.
2772   ///
2773   /// This method will evaluate \p Pred on returned values and return
2774   /// true if (1) all returned values are known, and (2) \p Pred returned true
2775   /// for all returned values.
2776   ///
2777   /// Note: Unlike the Attributor::checkForAllReturnedValuesAndReturnInsts
2778   /// method, this one will not filter dead return instructions.
2779   virtual bool checkForAllReturnedValuesAndReturnInsts(
2780       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred)
2781       const = 0;
2782 
2783   using iterator =
2784       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::iterator;
2785   using const_iterator =
2786       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::const_iterator;
2787   virtual llvm::iterator_range<iterator> returned_values() = 0;
2788   virtual llvm::iterator_range<const_iterator> returned_values() const = 0;
2789 
2790   virtual size_t getNumReturnValues() const = 0;
2791 
2792   /// Create an abstract attribute view for the position \p IRP.
2793   static AAReturnedValues &createForPosition(const IRPosition &IRP,
2794                                              Attributor &A);
2795 
2796   /// See AbstractAttribute::getName()
2797   const std::string getName() const override { return "AAReturnedValues"; }
2798 
2799   /// See AbstractAttribute::getIdAddr()
2800   const char *getIdAddr() const override { return &ID; }
2801 
2802   /// This function should return true if the type of the \p AA is
2803   /// AAReturnedValues
2804   static bool classof(const AbstractAttribute *AA) {
2805     return (AA->getIdAddr() == &ID);
2806   }
2807 
2808   /// Unique ID (due to the unique address)
2809   static const char ID;
2810 };
2811 
2812 struct AANoUnwind
2813     : public IRAttribute<Attribute::NoUnwind,
2814                          StateWrapper<BooleanState, AbstractAttribute>> {
2815   AANoUnwind(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2816 
2817   /// Returns true if nounwind is assumed.
2818   bool isAssumedNoUnwind() const { return getAssumed(); }
2819 
2820   /// Returns true if nounwind is known.
2821   bool isKnownNoUnwind() const { return getKnown(); }
2822 
2823   /// Create an abstract attribute view for the position \p IRP.
2824   static AANoUnwind &createForPosition(const IRPosition &IRP, Attributor &A);
2825 
2826   /// See AbstractAttribute::getName()
2827   const std::string getName() const override { return "AANoUnwind"; }
2828 
2829   /// See AbstractAttribute::getIdAddr()
2830   const char *getIdAddr() const override { return &ID; }
2831 
2832   /// This function should return true if the type of the \p AA is AANoUnwind
2833   static bool classof(const AbstractAttribute *AA) {
2834     return (AA->getIdAddr() == &ID);
2835   }
2836 
2837   /// Unique ID (due to the unique address)
2838   static const char ID;
2839 };
2840 
2841 struct AANoSync
2842     : public IRAttribute<Attribute::NoSync,
2843                          StateWrapper<BooleanState, AbstractAttribute>> {
2844   AANoSync(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2845 
2846   /// Returns true if "nosync" is assumed.
2847   bool isAssumedNoSync() const { return getAssumed(); }
2848 
2849   /// Returns true if "nosync" is known.
2850   bool isKnownNoSync() const { return getKnown(); }
2851 
2852   /// Create an abstract attribute view for the position \p IRP.
2853   static AANoSync &createForPosition(const IRPosition &IRP, Attributor &A);
2854 
2855   /// See AbstractAttribute::getName()
2856   const std::string getName() const override { return "AANoSync"; }
2857 
2858   /// See AbstractAttribute::getIdAddr()
2859   const char *getIdAddr() const override { return &ID; }
2860 
2861   /// This function should return true if the type of the \p AA is AANoSync
2862   static bool classof(const AbstractAttribute *AA) {
2863     return (AA->getIdAddr() == &ID);
2864   }
2865 
2866   /// Unique ID (due to the unique address)
2867   static const char ID;
2868 };
2869 
2870 /// An abstract interface for all nonnull attributes.
2871 struct AANonNull
2872     : public IRAttribute<Attribute::NonNull,
2873                          StateWrapper<BooleanState, AbstractAttribute>> {
2874   AANonNull(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2875 
2876   /// Return true if we assume that the underlying value is nonnull.
2877   bool isAssumedNonNull() const { return getAssumed(); }
2878 
2879   /// Return true if we know that underlying value is nonnull.
2880   bool isKnownNonNull() const { return getKnown(); }
2881 
2882   /// Create an abstract attribute view for the position \p IRP.
2883   static AANonNull &createForPosition(const IRPosition &IRP, Attributor &A);
2884 
2885   /// See AbstractAttribute::getName()
2886   const std::string getName() const override { return "AANonNull"; }
2887 
2888   /// See AbstractAttribute::getIdAddr()
2889   const char *getIdAddr() const override { return &ID; }
2890 
2891   /// This function should return true if the type of the \p AA is AANonNull
2892   static bool classof(const AbstractAttribute *AA) {
2893     return (AA->getIdAddr() == &ID);
2894   }
2895 
2896   /// Unique ID (due to the unique address)
2897   static const char ID;
2898 };
2899 
2900 /// An abstract attribute for norecurse.
2901 struct AANoRecurse
2902     : public IRAttribute<Attribute::NoRecurse,
2903                          StateWrapper<BooleanState, AbstractAttribute>> {
2904   AANoRecurse(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2905 
2906   /// Return true if "norecurse" is assumed.
2907   bool isAssumedNoRecurse() const { return getAssumed(); }
2908 
2909   /// Return true if "norecurse" is known.
2910   bool isKnownNoRecurse() const { return getKnown(); }
2911 
2912   /// Create an abstract attribute view for the position \p IRP.
2913   static AANoRecurse &createForPosition(const IRPosition &IRP, Attributor &A);
2914 
2915   /// See AbstractAttribute::getName()
2916   const std::string getName() const override { return "AANoRecurse"; }
2917 
2918   /// See AbstractAttribute::getIdAddr()
2919   const char *getIdAddr() const override { return &ID; }
2920 
2921   /// This function should return true if the type of the \p AA is AANoRecurse
2922   static bool classof(const AbstractAttribute *AA) {
2923     return (AA->getIdAddr() == &ID);
2924   }
2925 
2926   /// Unique ID (due to the unique address)
2927   static const char ID;
2928 };
2929 
2930 /// An abstract attribute for willreturn.
2931 struct AAWillReturn
2932     : public IRAttribute<Attribute::WillReturn,
2933                          StateWrapper<BooleanState, AbstractAttribute>> {
2934   AAWillReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
2935 
2936   /// Return true if "willreturn" is assumed.
2937   bool isAssumedWillReturn() const { return getAssumed(); }
2938 
2939   /// Return true if "willreturn" is known.
2940   bool isKnownWillReturn() const { return getKnown(); }
2941 
2942   /// Create an abstract attribute view for the position \p IRP.
2943   static AAWillReturn &createForPosition(const IRPosition &IRP, Attributor &A);
2944 
2945   /// See AbstractAttribute::getName()
2946   const std::string getName() const override { return "AAWillReturn"; }
2947 
2948   /// See AbstractAttribute::getIdAddr()
2949   const char *getIdAddr() const override { return &ID; }
2950 
2951   /// This function should return true if the type of the \p AA is AAWillReturn
2952   static bool classof(const AbstractAttribute *AA) {
2953     return (AA->getIdAddr() == &ID);
2954   }
2955 
2956   /// Unique ID (due to the unique address)
2957   static const char ID;
2958 };
2959 
2960 /// An abstract attribute for undefined behavior.
2961 struct AAUndefinedBehavior
2962     : public StateWrapper<BooleanState, AbstractAttribute> {
2963   using Base = StateWrapper<BooleanState, AbstractAttribute>;
2964   AAUndefinedBehavior(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
2965 
2966   /// Return true if "undefined behavior" is assumed.
2967   bool isAssumedToCauseUB() const { return getAssumed(); }
2968 
2969   /// Return true if "undefined behavior" is assumed for a specific instruction.
2970   virtual bool isAssumedToCauseUB(Instruction *I) const = 0;
2971 
2972   /// Return true if "undefined behavior" is known.
2973   bool isKnownToCauseUB() const { return getKnown(); }
2974 
2975   /// Return true if "undefined behavior" is known for a specific instruction.
2976   virtual bool isKnownToCauseUB(Instruction *I) const = 0;
2977 
2978   /// Create an abstract attribute view for the position \p IRP.
2979   static AAUndefinedBehavior &createForPosition(const IRPosition &IRP,
2980                                                 Attributor &A);
2981 
2982   /// See AbstractAttribute::getName()
2983   const std::string getName() const override { return "AAUndefinedBehavior"; }
2984 
2985   /// See AbstractAttribute::getIdAddr()
2986   const char *getIdAddr() const override { return &ID; }
2987 
2988   /// This function should return true if the type of the \p AA is
2989   /// AAUndefineBehavior
2990   static bool classof(const AbstractAttribute *AA) {
2991     return (AA->getIdAddr() == &ID);
2992   }
2993 
2994   /// Unique ID (due to the unique address)
2995   static const char ID;
2996 };
2997 
2998 /// An abstract interface to determine reachability of point A to B.
2999 struct AAReachability : public StateWrapper<BooleanState, AbstractAttribute> {
3000   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3001   AAReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3002 
3003   /// Returns true if 'From' instruction is assumed to reach, 'To' instruction.
3004   /// Users should provide two positions they are interested in, and the class
3005   /// determines (and caches) reachability.
3006   bool isAssumedReachable(Attributor &A, const Instruction &From,
3007                           const Instruction &To) const {
3008     if (!getState().isValidState())
3009       return true;
3010     return A.getInfoCache().getPotentiallyReachable(From, To);
3011   }
3012 
3013   /// Returns true if 'From' instruction is known to reach, 'To' instruction.
3014   /// Users should provide two positions they are interested in, and the class
3015   /// determines (and caches) reachability.
3016   bool isKnownReachable(Attributor &A, const Instruction &From,
3017                         const Instruction &To) const {
3018     if (!getState().isValidState())
3019       return false;
3020     return A.getInfoCache().getPotentiallyReachable(From, To);
3021   }
3022 
3023   /// Create an abstract attribute view for the position \p IRP.
3024   static AAReachability &createForPosition(const IRPosition &IRP,
3025                                            Attributor &A);
3026 
3027   /// See AbstractAttribute::getName()
3028   const std::string getName() const override { return "AAReachability"; }
3029 
3030   /// See AbstractAttribute::getIdAddr()
3031   const char *getIdAddr() const override { return &ID; }
3032 
3033   /// This function should return true if the type of the \p AA is
3034   /// AAReachability
3035   static bool classof(const AbstractAttribute *AA) {
3036     return (AA->getIdAddr() == &ID);
3037   }
3038 
3039   /// Unique ID (due to the unique address)
3040   static const char ID;
3041 };
3042 
3043 /// An abstract interface for all noalias attributes.
3044 struct AANoAlias
3045     : public IRAttribute<Attribute::NoAlias,
3046                          StateWrapper<BooleanState, AbstractAttribute>> {
3047   AANoAlias(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3048 
3049   /// Return true if we assume that the underlying value is alias.
3050   bool isAssumedNoAlias() const { return getAssumed(); }
3051 
3052   /// Return true if we know that underlying value is noalias.
3053   bool isKnownNoAlias() const { return getKnown(); }
3054 
3055   /// Create an abstract attribute view for the position \p IRP.
3056   static AANoAlias &createForPosition(const IRPosition &IRP, Attributor &A);
3057 
3058   /// See AbstractAttribute::getName()
3059   const std::string getName() const override { return "AANoAlias"; }
3060 
3061   /// See AbstractAttribute::getIdAddr()
3062   const char *getIdAddr() const override { return &ID; }
3063 
3064   /// This function should return true if the type of the \p AA is AANoAlias
3065   static bool classof(const AbstractAttribute *AA) {
3066     return (AA->getIdAddr() == &ID);
3067   }
3068 
3069   /// Unique ID (due to the unique address)
3070   static const char ID;
3071 };
3072 
3073 /// An AbstractAttribute for nofree.
3074 struct AANoFree
3075     : public IRAttribute<Attribute::NoFree,
3076                          StateWrapper<BooleanState, AbstractAttribute>> {
3077   AANoFree(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3078 
3079   /// Return true if "nofree" is assumed.
3080   bool isAssumedNoFree() const { return getAssumed(); }
3081 
3082   /// Return true if "nofree" is known.
3083   bool isKnownNoFree() const { return getKnown(); }
3084 
3085   /// Create an abstract attribute view for the position \p IRP.
3086   static AANoFree &createForPosition(const IRPosition &IRP, Attributor &A);
3087 
3088   /// See AbstractAttribute::getName()
3089   const std::string getName() const override { return "AANoFree"; }
3090 
3091   /// See AbstractAttribute::getIdAddr()
3092   const char *getIdAddr() const override { return &ID; }
3093 
3094   /// This function should return true if the type of the \p AA is AANoFree
3095   static bool classof(const AbstractAttribute *AA) {
3096     return (AA->getIdAddr() == &ID);
3097   }
3098 
3099   /// Unique ID (due to the unique address)
3100   static const char ID;
3101 };
3102 
3103 /// An AbstractAttribute for noreturn.
3104 struct AANoReturn
3105     : public IRAttribute<Attribute::NoReturn,
3106                          StateWrapper<BooleanState, AbstractAttribute>> {
3107   AANoReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3108 
3109   /// Return true if the underlying object is assumed to never return.
3110   bool isAssumedNoReturn() const { return getAssumed(); }
3111 
3112   /// Return true if the underlying object is known to never return.
3113   bool isKnownNoReturn() const { return getKnown(); }
3114 
3115   /// Create an abstract attribute view for the position \p IRP.
3116   static AANoReturn &createForPosition(const IRPosition &IRP, Attributor &A);
3117 
3118   /// See AbstractAttribute::getName()
3119   const std::string getName() const override { return "AANoReturn"; }
3120 
3121   /// See AbstractAttribute::getIdAddr()
3122   const char *getIdAddr() const override { return &ID; }
3123 
3124   /// This function should return true if the type of the \p AA is AANoReturn
3125   static bool classof(const AbstractAttribute *AA) {
3126     return (AA->getIdAddr() == &ID);
3127   }
3128 
3129   /// Unique ID (due to the unique address)
3130   static const char ID;
3131 };
3132 
3133 /// An abstract interface for liveness abstract attribute.
3134 struct AAIsDead
3135     : public StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute> {
3136   using Base = StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute>;
3137   AAIsDead(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3138 
3139   /// State encoding bits. A set bit in the state means the property holds.
3140   enum {
3141     HAS_NO_EFFECT = 1 << 0,
3142     IS_REMOVABLE = 1 << 1,
3143 
3144     IS_DEAD = HAS_NO_EFFECT | IS_REMOVABLE,
3145   };
3146   static_assert(IS_DEAD == getBestState(), "Unexpected BEST_STATE value");
3147 
3148 protected:
3149   /// The query functions are protected such that other attributes need to go
3150   /// through the Attributor interfaces: `Attributor::isAssumedDead(...)`
3151 
3152   /// Returns true if the underlying value is assumed dead.
3153   virtual bool isAssumedDead() const = 0;
3154 
3155   /// Returns true if the underlying value is known dead.
3156   virtual bool isKnownDead() const = 0;
3157 
3158   /// Returns true if \p BB is assumed dead.
3159   virtual bool isAssumedDead(const BasicBlock *BB) const = 0;
3160 
3161   /// Returns true if \p BB is known dead.
3162   virtual bool isKnownDead(const BasicBlock *BB) const = 0;
3163 
3164   /// Returns true if \p I is assumed dead.
3165   virtual bool isAssumedDead(const Instruction *I) const = 0;
3166 
3167   /// Returns true if \p I is known dead.
3168   virtual bool isKnownDead(const Instruction *I) const = 0;
3169 
3170   /// This method is used to check if at least one instruction in a collection
3171   /// of instructions is live.
3172   template <typename T> bool isLiveInstSet(T begin, T end) const {
3173     for (const auto &I : llvm::make_range(begin, end)) {
3174       assert(I->getFunction() == getIRPosition().getAssociatedFunction() &&
3175              "Instruction must be in the same anchor scope function.");
3176 
3177       if (!isAssumedDead(I))
3178         return true;
3179     }
3180 
3181     return false;
3182   }
3183 
3184 public:
3185   /// Create an abstract attribute view for the position \p IRP.
3186   static AAIsDead &createForPosition(const IRPosition &IRP, Attributor &A);
3187 
3188   /// Determine if \p F might catch asynchronous exceptions.
3189   static bool mayCatchAsynchronousExceptions(const Function &F) {
3190     return F.hasPersonalityFn() && !canSimplifyInvokeNoUnwind(&F);
3191   }
3192 
3193   /// Return if the edge from \p From BB to \p To BB is assumed dead.
3194   /// This is specifically useful in AAReachability.
3195   virtual bool isEdgeDead(const BasicBlock *From, const BasicBlock *To) const {
3196     return false;
3197   }
3198 
3199   /// See AbstractAttribute::getName()
3200   const std::string getName() const override { return "AAIsDead"; }
3201 
3202   /// See AbstractAttribute::getIdAddr()
3203   const char *getIdAddr() const override { return &ID; }
3204 
3205   /// This function should return true if the type of the \p AA is AAIsDead
3206   static bool classof(const AbstractAttribute *AA) {
3207     return (AA->getIdAddr() == &ID);
3208   }
3209 
3210   /// Unique ID (due to the unique address)
3211   static const char ID;
3212 
3213   friend struct Attributor;
3214 };
3215 
3216 /// State for dereferenceable attribute
3217 struct DerefState : AbstractState {
3218 
3219   static DerefState getBestState() { return DerefState(); }
3220   static DerefState getBestState(const DerefState &) { return getBestState(); }
3221 
3222   /// Return the worst possible representable state.
3223   static DerefState getWorstState() {
3224     DerefState DS;
3225     DS.indicatePessimisticFixpoint();
3226     return DS;
3227   }
3228   static DerefState getWorstState(const DerefState &) {
3229     return getWorstState();
3230   }
3231 
3232   /// State representing for dereferenceable bytes.
3233   IncIntegerState<> DerefBytesState;
3234 
3235   /// Map representing for accessed memory offsets and sizes.
3236   /// A key is Offset and a value is size.
3237   /// If there is a load/store instruction something like,
3238   ///   p[offset] = v;
3239   /// (offset, sizeof(v)) will be inserted to this map.
3240   /// std::map is used because we want to iterate keys in ascending order.
3241   std::map<int64_t, uint64_t> AccessedBytesMap;
3242 
3243   /// Helper function to calculate dereferenceable bytes from current known
3244   /// bytes and accessed bytes.
3245   ///
3246   /// int f(int *A){
3247   ///    *A = 0;
3248   ///    *(A+2) = 2;
3249   ///    *(A+1) = 1;
3250   ///    *(A+10) = 10;
3251   /// }
3252   /// ```
3253   /// In that case, AccessedBytesMap is `{0:4, 4:4, 8:4, 40:4}`.
3254   /// AccessedBytesMap is std::map so it is iterated in accending order on
3255   /// key(Offset). So KnownBytes will be updated like this:
3256   ///
3257   /// |Access | KnownBytes
3258   /// |(0, 4)| 0 -> 4
3259   /// |(4, 4)| 4 -> 8
3260   /// |(8, 4)| 8 -> 12
3261   /// |(40, 4) | 12 (break)
3262   void computeKnownDerefBytesFromAccessedMap() {
3263     int64_t KnownBytes = DerefBytesState.getKnown();
3264     for (auto &Access : AccessedBytesMap) {
3265       if (KnownBytes < Access.first)
3266         break;
3267       KnownBytes = std::max(KnownBytes, Access.first + (int64_t)Access.second);
3268     }
3269 
3270     DerefBytesState.takeKnownMaximum(KnownBytes);
3271   }
3272 
3273   /// State representing that whether the value is globaly dereferenceable.
3274   BooleanState GlobalState;
3275 
3276   /// See AbstractState::isValidState()
3277   bool isValidState() const override { return DerefBytesState.isValidState(); }
3278 
3279   /// See AbstractState::isAtFixpoint()
3280   bool isAtFixpoint() const override {
3281     return !isValidState() ||
3282            (DerefBytesState.isAtFixpoint() && GlobalState.isAtFixpoint());
3283   }
3284 
3285   /// See AbstractState::indicateOptimisticFixpoint(...)
3286   ChangeStatus indicateOptimisticFixpoint() override {
3287     DerefBytesState.indicateOptimisticFixpoint();
3288     GlobalState.indicateOptimisticFixpoint();
3289     return ChangeStatus::UNCHANGED;
3290   }
3291 
3292   /// See AbstractState::indicatePessimisticFixpoint(...)
3293   ChangeStatus indicatePessimisticFixpoint() override {
3294     DerefBytesState.indicatePessimisticFixpoint();
3295     GlobalState.indicatePessimisticFixpoint();
3296     return ChangeStatus::CHANGED;
3297   }
3298 
3299   /// Update known dereferenceable bytes.
3300   void takeKnownDerefBytesMaximum(uint64_t Bytes) {
3301     DerefBytesState.takeKnownMaximum(Bytes);
3302 
3303     // Known bytes might increase.
3304     computeKnownDerefBytesFromAccessedMap();
3305   }
3306 
3307   /// Update assumed dereferenceable bytes.
3308   void takeAssumedDerefBytesMinimum(uint64_t Bytes) {
3309     DerefBytesState.takeAssumedMinimum(Bytes);
3310   }
3311 
3312   /// Add accessed bytes to the map.
3313   void addAccessedBytes(int64_t Offset, uint64_t Size) {
3314     uint64_t &AccessedBytes = AccessedBytesMap[Offset];
3315     AccessedBytes = std::max(AccessedBytes, Size);
3316 
3317     // Known bytes might increase.
3318     computeKnownDerefBytesFromAccessedMap();
3319   }
3320 
3321   /// Equality for DerefState.
3322   bool operator==(const DerefState &R) const {
3323     return this->DerefBytesState == R.DerefBytesState &&
3324            this->GlobalState == R.GlobalState;
3325   }
3326 
3327   /// Inequality for DerefState.
3328   bool operator!=(const DerefState &R) const { return !(*this == R); }
3329 
3330   /// See IntegerStateBase::operator^=
3331   DerefState operator^=(const DerefState &R) {
3332     DerefBytesState ^= R.DerefBytesState;
3333     GlobalState ^= R.GlobalState;
3334     return *this;
3335   }
3336 
3337   /// See IntegerStateBase::operator+=
3338   DerefState operator+=(const DerefState &R) {
3339     DerefBytesState += R.DerefBytesState;
3340     GlobalState += R.GlobalState;
3341     return *this;
3342   }
3343 
3344   /// See IntegerStateBase::operator&=
3345   DerefState operator&=(const DerefState &R) {
3346     DerefBytesState &= R.DerefBytesState;
3347     GlobalState &= R.GlobalState;
3348     return *this;
3349   }
3350 
3351   /// See IntegerStateBase::operator|=
3352   DerefState operator|=(const DerefState &R) {
3353     DerefBytesState |= R.DerefBytesState;
3354     GlobalState |= R.GlobalState;
3355     return *this;
3356   }
3357 
3358 protected:
3359   const AANonNull *NonNullAA = nullptr;
3360 };
3361 
3362 /// An abstract interface for all dereferenceable attribute.
3363 struct AADereferenceable
3364     : public IRAttribute<Attribute::Dereferenceable,
3365                          StateWrapper<DerefState, AbstractAttribute>> {
3366   AADereferenceable(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3367 
3368   /// Return true if we assume that the underlying value is nonnull.
3369   bool isAssumedNonNull() const {
3370     return NonNullAA && NonNullAA->isAssumedNonNull();
3371   }
3372 
3373   /// Return true if we know that the underlying value is nonnull.
3374   bool isKnownNonNull() const {
3375     return NonNullAA && NonNullAA->isKnownNonNull();
3376   }
3377 
3378   /// Return true if we assume that underlying value is
3379   /// dereferenceable(_or_null) globally.
3380   bool isAssumedGlobal() const { return GlobalState.getAssumed(); }
3381 
3382   /// Return true if we know that underlying value is
3383   /// dereferenceable(_or_null) globally.
3384   bool isKnownGlobal() const { return GlobalState.getKnown(); }
3385 
3386   /// Return assumed dereferenceable bytes.
3387   uint32_t getAssumedDereferenceableBytes() const {
3388     return DerefBytesState.getAssumed();
3389   }
3390 
3391   /// Return known dereferenceable bytes.
3392   uint32_t getKnownDereferenceableBytes() const {
3393     return DerefBytesState.getKnown();
3394   }
3395 
3396   /// Create an abstract attribute view for the position \p IRP.
3397   static AADereferenceable &createForPosition(const IRPosition &IRP,
3398                                               Attributor &A);
3399 
3400   /// See AbstractAttribute::getName()
3401   const std::string getName() const override { return "AADereferenceable"; }
3402 
3403   /// See AbstractAttribute::getIdAddr()
3404   const char *getIdAddr() const override { return &ID; }
3405 
3406   /// This function should return true if the type of the \p AA is
3407   /// AADereferenceable
3408   static bool classof(const AbstractAttribute *AA) {
3409     return (AA->getIdAddr() == &ID);
3410   }
3411 
3412   /// Unique ID (due to the unique address)
3413   static const char ID;
3414 };
3415 
3416 using AAAlignmentStateType =
3417     IncIntegerState<uint32_t, Value::MaximumAlignment, 1>;
3418 /// An abstract interface for all align attributes.
3419 struct AAAlign : public IRAttribute<
3420                      Attribute::Alignment,
3421                      StateWrapper<AAAlignmentStateType, AbstractAttribute>> {
3422   AAAlign(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3423 
3424   /// Return assumed alignment.
3425   unsigned getAssumedAlign() const { return getAssumed(); }
3426 
3427   /// Return known alignment.
3428   unsigned getKnownAlign() const { return getKnown(); }
3429 
3430   /// See AbstractAttribute::getName()
3431   const std::string getName() const override { return "AAAlign"; }
3432 
3433   /// See AbstractAttribute::getIdAddr()
3434   const char *getIdAddr() const override { return &ID; }
3435 
3436   /// This function should return true if the type of the \p AA is AAAlign
3437   static bool classof(const AbstractAttribute *AA) {
3438     return (AA->getIdAddr() == &ID);
3439   }
3440 
3441   /// Create an abstract attribute view for the position \p IRP.
3442   static AAAlign &createForPosition(const IRPosition &IRP, Attributor &A);
3443 
3444   /// Unique ID (due to the unique address)
3445   static const char ID;
3446 };
3447 
3448 /// An abstract interface for all nocapture attributes.
3449 struct AANoCapture
3450     : public IRAttribute<
3451           Attribute::NoCapture,
3452           StateWrapper<BitIntegerState<uint16_t, 7, 0>, AbstractAttribute>> {
3453   AANoCapture(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3454 
3455   /// State encoding bits. A set bit in the state means the property holds.
3456   /// NO_CAPTURE is the best possible state, 0 the worst possible state.
3457   enum {
3458     NOT_CAPTURED_IN_MEM = 1 << 0,
3459     NOT_CAPTURED_IN_INT = 1 << 1,
3460     NOT_CAPTURED_IN_RET = 1 << 2,
3461 
3462     /// If we do not capture the value in memory or through integers we can only
3463     /// communicate it back as a derived pointer.
3464     NO_CAPTURE_MAYBE_RETURNED = NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT,
3465 
3466     /// If we do not capture the value in memory, through integers, or as a
3467     /// derived pointer we know it is not captured.
3468     NO_CAPTURE =
3469         NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT | NOT_CAPTURED_IN_RET,
3470   };
3471 
3472   /// Return true if we know that the underlying value is not captured in its
3473   /// respective scope.
3474   bool isKnownNoCapture() const { return isKnown(NO_CAPTURE); }
3475 
3476   /// Return true if we assume that the underlying value is not captured in its
3477   /// respective scope.
3478   bool isAssumedNoCapture() const { return isAssumed(NO_CAPTURE); }
3479 
3480   /// Return true if we know that the underlying value is not captured in its
3481   /// respective scope but we allow it to escape through a "return".
3482   bool isKnownNoCaptureMaybeReturned() const {
3483     return isKnown(NO_CAPTURE_MAYBE_RETURNED);
3484   }
3485 
3486   /// Return true if we assume that the underlying value is not captured in its
3487   /// respective scope but we allow it to escape through a "return".
3488   bool isAssumedNoCaptureMaybeReturned() const {
3489     return isAssumed(NO_CAPTURE_MAYBE_RETURNED);
3490   }
3491 
3492   /// Create an abstract attribute view for the position \p IRP.
3493   static AANoCapture &createForPosition(const IRPosition &IRP, Attributor &A);
3494 
3495   /// See AbstractAttribute::getName()
3496   const std::string getName() const override { return "AANoCapture"; }
3497 
3498   /// See AbstractAttribute::getIdAddr()
3499   const char *getIdAddr() const override { return &ID; }
3500 
3501   /// This function should return true if the type of the \p AA is AANoCapture
3502   static bool classof(const AbstractAttribute *AA) {
3503     return (AA->getIdAddr() == &ID);
3504   }
3505 
3506   /// Unique ID (due to the unique address)
3507   static const char ID;
3508 };
3509 
3510 struct ValueSimplifyStateType : public AbstractState {
3511 
3512   ValueSimplifyStateType(Type *Ty) : Ty(Ty) {}
3513 
3514   static ValueSimplifyStateType getBestState(Type *Ty) {
3515     return ValueSimplifyStateType(Ty);
3516   }
3517   static ValueSimplifyStateType getBestState(const ValueSimplifyStateType &VS) {
3518     return getBestState(VS.Ty);
3519   }
3520 
3521   /// Return the worst possible representable state.
3522   static ValueSimplifyStateType getWorstState(Type *Ty) {
3523     ValueSimplifyStateType DS(Ty);
3524     DS.indicatePessimisticFixpoint();
3525     return DS;
3526   }
3527   static ValueSimplifyStateType
3528   getWorstState(const ValueSimplifyStateType &VS) {
3529     return getWorstState(VS.Ty);
3530   }
3531 
3532   /// See AbstractState::isValidState(...)
3533   bool isValidState() const override { return BS.isValidState(); }
3534 
3535   /// See AbstractState::isAtFixpoint(...)
3536   bool isAtFixpoint() const override { return BS.isAtFixpoint(); }
3537 
3538   /// Return the assumed state encoding.
3539   ValueSimplifyStateType getAssumed() { return *this; }
3540   const ValueSimplifyStateType &getAssumed() const { return *this; }
3541 
3542   /// See AbstractState::indicatePessimisticFixpoint(...)
3543   ChangeStatus indicatePessimisticFixpoint() override {
3544     return BS.indicatePessimisticFixpoint();
3545   }
3546 
3547   /// See AbstractState::indicateOptimisticFixpoint(...)
3548   ChangeStatus indicateOptimisticFixpoint() override {
3549     return BS.indicateOptimisticFixpoint();
3550   }
3551 
3552   /// "Clamp" this state with \p PVS.
3553   ValueSimplifyStateType operator^=(const ValueSimplifyStateType &VS) {
3554     BS ^= VS.BS;
3555     unionAssumed(VS.SimplifiedAssociatedValue);
3556     return *this;
3557   }
3558 
3559   bool operator==(const ValueSimplifyStateType &RHS) const {
3560     if (isValidState() != RHS.isValidState())
3561       return false;
3562     if (!isValidState() && !RHS.isValidState())
3563       return true;
3564     return SimplifiedAssociatedValue == RHS.SimplifiedAssociatedValue;
3565   }
3566 
3567 protected:
3568   /// The type of the original value.
3569   Type *Ty;
3570 
3571   /// Merge \p Other into the currently assumed simplified value
3572   bool unionAssumed(Optional<Value *> Other);
3573 
3574   /// Helper to track validity and fixpoint
3575   BooleanState BS;
3576 
3577   /// An assumed simplified value. Initially, it is set to Optional::None, which
3578   /// means that the value is not clear under current assumption. If in the
3579   /// pessimistic state, getAssumedSimplifiedValue doesn't return this value but
3580   /// returns orignal associated value.
3581   Optional<Value *> SimplifiedAssociatedValue;
3582 };
3583 
3584 /// An abstract interface for value simplify abstract attribute.
3585 struct AAValueSimplify
3586     : public StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *> {
3587   using Base = StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *>;
3588   AAValueSimplify(const IRPosition &IRP, Attributor &A)
3589       : Base(IRP, IRP.getAssociatedType()) {}
3590 
3591   /// Create an abstract attribute view for the position \p IRP.
3592   static AAValueSimplify &createForPosition(const IRPosition &IRP,
3593                                             Attributor &A);
3594 
3595   /// See AbstractAttribute::getName()
3596   const std::string getName() const override { return "AAValueSimplify"; }
3597 
3598   /// See AbstractAttribute::getIdAddr()
3599   const char *getIdAddr() const override { return &ID; }
3600 
3601   /// This function should return true if the type of the \p AA is
3602   /// AAValueSimplify
3603   static bool classof(const AbstractAttribute *AA) {
3604     return (AA->getIdAddr() == &ID);
3605   }
3606 
3607   /// Unique ID (due to the unique address)
3608   static const char ID;
3609 
3610 private:
3611   /// Return an assumed simplified value if a single candidate is found. If
3612   /// there cannot be one, return original value. If it is not clear yet, return
3613   /// the Optional::NoneType.
3614   ///
3615   /// Use `Attributor::getAssumedSimplified` for value simplification.
3616   virtual Optional<Value *> getAssumedSimplifiedValue(Attributor &A) const = 0;
3617 
3618   friend struct Attributor;
3619 };
3620 
3621 struct AAHeapToStack : public StateWrapper<BooleanState, AbstractAttribute> {
3622   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3623   AAHeapToStack(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3624 
3625   /// Returns true if HeapToStack conversion is assumed to be possible.
3626   virtual bool isAssumedHeapToStack(const CallBase &CB) const = 0;
3627 
3628   /// Returns true if HeapToStack conversion is assumed and the CB is a
3629   /// callsite to a free operation to be removed.
3630   virtual bool isAssumedHeapToStackRemovedFree(CallBase &CB) const = 0;
3631 
3632   /// Create an abstract attribute view for the position \p IRP.
3633   static AAHeapToStack &createForPosition(const IRPosition &IRP, Attributor &A);
3634 
3635   /// See AbstractAttribute::getName()
3636   const std::string getName() const override { return "AAHeapToStack"; }
3637 
3638   /// See AbstractAttribute::getIdAddr()
3639   const char *getIdAddr() const override { return &ID; }
3640 
3641   /// This function should return true if the type of the \p AA is AAHeapToStack
3642   static bool classof(const AbstractAttribute *AA) {
3643     return (AA->getIdAddr() == &ID);
3644   }
3645 
3646   /// Unique ID (due to the unique address)
3647   static const char ID;
3648 };
3649 
3650 /// An abstract interface for privatizability.
3651 ///
3652 /// A pointer is privatizable if it can be replaced by a new, private one.
3653 /// Privatizing pointer reduces the use count, interaction between unrelated
3654 /// code parts.
3655 ///
3656 /// In order for a pointer to be privatizable its value cannot be observed
3657 /// (=nocapture), it is (for now) not written (=readonly & noalias), we know
3658 /// what values are necessary to make the private copy look like the original
3659 /// one, and the values we need can be loaded (=dereferenceable).
3660 struct AAPrivatizablePtr
3661     : public StateWrapper<BooleanState, AbstractAttribute> {
3662   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3663   AAPrivatizablePtr(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3664 
3665   /// Returns true if pointer privatization is assumed to be possible.
3666   bool isAssumedPrivatizablePtr() const { return getAssumed(); }
3667 
3668   /// Returns true if pointer privatization is known to be possible.
3669   bool isKnownPrivatizablePtr() const { return getKnown(); }
3670 
3671   /// Return the type we can choose for a private copy of the underlying
3672   /// value. None means it is not clear yet, nullptr means there is none.
3673   virtual Optional<Type *> getPrivatizableType() const = 0;
3674 
3675   /// Create an abstract attribute view for the position \p IRP.
3676   static AAPrivatizablePtr &createForPosition(const IRPosition &IRP,
3677                                               Attributor &A);
3678 
3679   /// See AbstractAttribute::getName()
3680   const std::string getName() const override { return "AAPrivatizablePtr"; }
3681 
3682   /// See AbstractAttribute::getIdAddr()
3683   const char *getIdAddr() const override { return &ID; }
3684 
3685   /// This function should return true if the type of the \p AA is
3686   /// AAPricatizablePtr
3687   static bool classof(const AbstractAttribute *AA) {
3688     return (AA->getIdAddr() == &ID);
3689   }
3690 
3691   /// Unique ID (due to the unique address)
3692   static const char ID;
3693 };
3694 
3695 /// An abstract interface for memory access kind related attributes
3696 /// (readnone/readonly/writeonly).
3697 struct AAMemoryBehavior
3698     : public IRAttribute<
3699           Attribute::ReadNone,
3700           StateWrapper<BitIntegerState<uint8_t, 3>, AbstractAttribute>> {
3701   AAMemoryBehavior(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3702 
3703   /// State encoding bits. A set bit in the state means the property holds.
3704   /// BEST_STATE is the best possible state, 0 the worst possible state.
3705   enum {
3706     NO_READS = 1 << 0,
3707     NO_WRITES = 1 << 1,
3708     NO_ACCESSES = NO_READS | NO_WRITES,
3709 
3710     BEST_STATE = NO_ACCESSES,
3711   };
3712   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
3713 
3714   /// Return true if we know that the underlying value is not read or accessed
3715   /// in its respective scope.
3716   bool isKnownReadNone() const { return isKnown(NO_ACCESSES); }
3717 
3718   /// Return true if we assume that the underlying value is not read or accessed
3719   /// in its respective scope.
3720   bool isAssumedReadNone() const { return isAssumed(NO_ACCESSES); }
3721 
3722   /// Return true if we know that the underlying value is not accessed
3723   /// (=written) in its respective scope.
3724   bool isKnownReadOnly() const { return isKnown(NO_WRITES); }
3725 
3726   /// Return true if we assume that the underlying value is not accessed
3727   /// (=written) in its respective scope.
3728   bool isAssumedReadOnly() const { return isAssumed(NO_WRITES); }
3729 
3730   /// Return true if we know that the underlying value is not read in its
3731   /// respective scope.
3732   bool isKnownWriteOnly() const { return isKnown(NO_READS); }
3733 
3734   /// Return true if we assume that the underlying value is not read in its
3735   /// respective scope.
3736   bool isAssumedWriteOnly() const { return isAssumed(NO_READS); }
3737 
3738   /// Create an abstract attribute view for the position \p IRP.
3739   static AAMemoryBehavior &createForPosition(const IRPosition &IRP,
3740                                              Attributor &A);
3741 
3742   /// See AbstractAttribute::getName()
3743   const std::string getName() const override { return "AAMemoryBehavior"; }
3744 
3745   /// See AbstractAttribute::getIdAddr()
3746   const char *getIdAddr() const override { return &ID; }
3747 
3748   /// This function should return true if the type of the \p AA is
3749   /// AAMemoryBehavior
3750   static bool classof(const AbstractAttribute *AA) {
3751     return (AA->getIdAddr() == &ID);
3752   }
3753 
3754   /// Unique ID (due to the unique address)
3755   static const char ID;
3756 };
3757 
3758 /// An abstract interface for all memory location attributes
3759 /// (readnone/argmemonly/inaccessiblememonly/inaccessibleorargmemonly).
3760 struct AAMemoryLocation
3761     : public IRAttribute<
3762           Attribute::ReadNone,
3763           StateWrapper<BitIntegerState<uint32_t, 511>, AbstractAttribute>> {
3764   using MemoryLocationsKind = StateType::base_t;
3765 
3766   AAMemoryLocation(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3767 
3768   /// Encoding of different locations that could be accessed by a memory
3769   /// access.
3770   enum {
3771     ALL_LOCATIONS = 0,
3772     NO_LOCAL_MEM = 1 << 0,
3773     NO_CONST_MEM = 1 << 1,
3774     NO_GLOBAL_INTERNAL_MEM = 1 << 2,
3775     NO_GLOBAL_EXTERNAL_MEM = 1 << 3,
3776     NO_GLOBAL_MEM = NO_GLOBAL_INTERNAL_MEM | NO_GLOBAL_EXTERNAL_MEM,
3777     NO_ARGUMENT_MEM = 1 << 4,
3778     NO_INACCESSIBLE_MEM = 1 << 5,
3779     NO_MALLOCED_MEM = 1 << 6,
3780     NO_UNKOWN_MEM = 1 << 7,
3781     NO_LOCATIONS = NO_LOCAL_MEM | NO_CONST_MEM | NO_GLOBAL_INTERNAL_MEM |
3782                    NO_GLOBAL_EXTERNAL_MEM | NO_ARGUMENT_MEM |
3783                    NO_INACCESSIBLE_MEM | NO_MALLOCED_MEM | NO_UNKOWN_MEM,
3784 
3785     // Helper bit to track if we gave up or not.
3786     VALID_STATE = NO_LOCATIONS + 1,
3787 
3788     BEST_STATE = NO_LOCATIONS | VALID_STATE,
3789   };
3790   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
3791 
3792   /// Return true if we know that the associated functions has no observable
3793   /// accesses.
3794   bool isKnownReadNone() const { return isKnown(NO_LOCATIONS); }
3795 
3796   /// Return true if we assume that the associated functions has no observable
3797   /// accesses.
3798   bool isAssumedReadNone() const {
3799     return isAssumed(NO_LOCATIONS) | isAssumedStackOnly();
3800   }
3801 
3802   /// Return true if we know that the associated functions has at most
3803   /// local/stack accesses.
3804   bool isKnowStackOnly() const {
3805     return isKnown(inverseLocation(NO_LOCAL_MEM, true, true));
3806   }
3807 
3808   /// Return true if we assume that the associated functions has at most
3809   /// local/stack accesses.
3810   bool isAssumedStackOnly() const {
3811     return isAssumed(inverseLocation(NO_LOCAL_MEM, true, true));
3812   }
3813 
3814   /// Return true if we know that the underlying value will only access
3815   /// inaccesible memory only (see Attribute::InaccessibleMemOnly).
3816   bool isKnownInaccessibleMemOnly() const {
3817     return isKnown(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
3818   }
3819 
3820   /// Return true if we assume that the underlying value will only access
3821   /// inaccesible memory only (see Attribute::InaccessibleMemOnly).
3822   bool isAssumedInaccessibleMemOnly() const {
3823     return isAssumed(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
3824   }
3825 
3826   /// Return true if we know that the underlying value will only access
3827   /// argument pointees (see Attribute::ArgMemOnly).
3828   bool isKnownArgMemOnly() const {
3829     return isKnown(inverseLocation(NO_ARGUMENT_MEM, true, true));
3830   }
3831 
3832   /// Return true if we assume that the underlying value will only access
3833   /// argument pointees (see Attribute::ArgMemOnly).
3834   bool isAssumedArgMemOnly() const {
3835     return isAssumed(inverseLocation(NO_ARGUMENT_MEM, true, true));
3836   }
3837 
3838   /// Return true if we know that the underlying value will only access
3839   /// inaccesible memory or argument pointees (see
3840   /// Attribute::InaccessibleOrArgMemOnly).
3841   bool isKnownInaccessibleOrArgMemOnly() const {
3842     return isKnown(
3843         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
3844   }
3845 
3846   /// Return true if we assume that the underlying value will only access
3847   /// inaccesible memory or argument pointees (see
3848   /// Attribute::InaccessibleOrArgMemOnly).
3849   bool isAssumedInaccessibleOrArgMemOnly() const {
3850     return isAssumed(
3851         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
3852   }
3853 
3854   /// Return true if the underlying value may access memory through arguement
3855   /// pointers of the associated function, if any.
3856   bool mayAccessArgMem() const { return !isAssumed(NO_ARGUMENT_MEM); }
3857 
3858   /// Return true if only the memory locations specififed by \p MLK are assumed
3859   /// to be accessed by the associated function.
3860   bool isAssumedSpecifiedMemOnly(MemoryLocationsKind MLK) const {
3861     return isAssumed(MLK);
3862   }
3863 
3864   /// Return the locations that are assumed to be not accessed by the associated
3865   /// function, if any.
3866   MemoryLocationsKind getAssumedNotAccessedLocation() const {
3867     return getAssumed();
3868   }
3869 
3870   /// Return the inverse of location \p Loc, thus for NO_XXX the return
3871   /// describes ONLY_XXX. The flags \p AndLocalMem and \p AndConstMem determine
3872   /// if local (=stack) and constant memory are allowed as well. Most of the
3873   /// time we do want them to be included, e.g., argmemonly allows accesses via
3874   /// argument pointers or local or constant memory accesses.
3875   static MemoryLocationsKind
3876   inverseLocation(MemoryLocationsKind Loc, bool AndLocalMem, bool AndConstMem) {
3877     return NO_LOCATIONS & ~(Loc | (AndLocalMem ? NO_LOCAL_MEM : 0) |
3878                             (AndConstMem ? NO_CONST_MEM : 0));
3879   };
3880 
3881   /// Return the locations encoded by \p MLK as a readable string.
3882   static std::string getMemoryLocationsAsStr(MemoryLocationsKind MLK);
3883 
3884   /// Simple enum to distinguish read/write/read-write accesses.
3885   enum AccessKind {
3886     NONE = 0,
3887     READ = 1 << 0,
3888     WRITE = 1 << 1,
3889     READ_WRITE = READ | WRITE,
3890   };
3891 
3892   /// Check \p Pred on all accesses to the memory kinds specified by \p MLK.
3893   ///
3894   /// This method will evaluate \p Pred on all accesses (access instruction +
3895   /// underlying accessed memory pointer) and it will return true if \p Pred
3896   /// holds every time.
3897   virtual bool checkForAllAccessesToMemoryKind(
3898       function_ref<bool(const Instruction *, const Value *, AccessKind,
3899                         MemoryLocationsKind)>
3900           Pred,
3901       MemoryLocationsKind MLK) const = 0;
3902 
3903   /// Create an abstract attribute view for the position \p IRP.
3904   static AAMemoryLocation &createForPosition(const IRPosition &IRP,
3905                                              Attributor &A);
3906 
3907   /// See AbstractState::getAsStr().
3908   const std::string getAsStr() const override {
3909     return getMemoryLocationsAsStr(getAssumedNotAccessedLocation());
3910   }
3911 
3912   /// See AbstractAttribute::getName()
3913   const std::string getName() const override { return "AAMemoryLocation"; }
3914 
3915   /// See AbstractAttribute::getIdAddr()
3916   const char *getIdAddr() const override { return &ID; }
3917 
3918   /// This function should return true if the type of the \p AA is
3919   /// AAMemoryLocation
3920   static bool classof(const AbstractAttribute *AA) {
3921     return (AA->getIdAddr() == &ID);
3922   }
3923 
3924   /// Unique ID (due to the unique address)
3925   static const char ID;
3926 };
3927 
3928 /// An abstract interface for range value analysis.
3929 struct AAValueConstantRange
3930     : public StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t> {
3931   using Base = StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t>;
3932   AAValueConstantRange(const IRPosition &IRP, Attributor &A)
3933       : Base(IRP, IRP.getAssociatedType()->getIntegerBitWidth()) {}
3934 
3935   /// See AbstractAttribute::getState(...).
3936   IntegerRangeState &getState() override { return *this; }
3937   const IntegerRangeState &getState() const override { return *this; }
3938 
3939   /// Create an abstract attribute view for the position \p IRP.
3940   static AAValueConstantRange &createForPosition(const IRPosition &IRP,
3941                                                  Attributor &A);
3942 
3943   /// Return an assumed range for the assocaited value a program point \p CtxI.
3944   /// If \p I is nullptr, simply return an assumed range.
3945   virtual ConstantRange
3946   getAssumedConstantRange(Attributor &A,
3947                           const Instruction *CtxI = nullptr) const = 0;
3948 
3949   /// Return a known range for the assocaited value at a program point \p CtxI.
3950   /// If \p I is nullptr, simply return a known range.
3951   virtual ConstantRange
3952   getKnownConstantRange(Attributor &A,
3953                         const Instruction *CtxI = nullptr) const = 0;
3954 
3955   /// Return an assumed constant for the assocaited value a program point \p
3956   /// CtxI.
3957   Optional<ConstantInt *>
3958   getAssumedConstantInt(Attributor &A,
3959                         const Instruction *CtxI = nullptr) const {
3960     ConstantRange RangeV = getAssumedConstantRange(A, CtxI);
3961     if (auto *C = RangeV.getSingleElement())
3962       return cast<ConstantInt>(
3963           ConstantInt::get(getAssociatedValue().getType(), *C));
3964     if (RangeV.isEmptySet())
3965       return llvm::None;
3966     return nullptr;
3967   }
3968 
3969   /// See AbstractAttribute::getName()
3970   const std::string getName() const override { return "AAValueConstantRange"; }
3971 
3972   /// See AbstractAttribute::getIdAddr()
3973   const char *getIdAddr() const override { return &ID; }
3974 
3975   /// This function should return true if the type of the \p AA is
3976   /// AAValueConstantRange
3977   static bool classof(const AbstractAttribute *AA) {
3978     return (AA->getIdAddr() == &ID);
3979   }
3980 
3981   /// Unique ID (due to the unique address)
3982   static const char ID;
3983 };
3984 
3985 /// A class for a set state.
3986 /// The assumed boolean state indicates whether the corresponding set is full
3987 /// set or not. If the assumed state is false, this is the worst state. The
3988 /// worst state (invalid state) of set of potential values is when the set
3989 /// contains every possible value (i.e. we cannot in any way limit the value
3990 /// that the target position can take). That never happens naturally, we only
3991 /// force it. As for the conditions under which we force it, see
3992 /// AAPotentialValues.
3993 template <typename MemberTy, typename KeyInfo = DenseMapInfo<MemberTy>>
3994 struct PotentialValuesState : AbstractState {
3995   using SetTy = DenseSet<MemberTy, KeyInfo>;
3996 
3997   PotentialValuesState() : IsValidState(true), UndefIsContained(false) {}
3998 
3999   PotentialValuesState(bool IsValid)
4000       : IsValidState(IsValid), UndefIsContained(false) {}
4001 
4002   /// See AbstractState::isValidState(...)
4003   bool isValidState() const override { return IsValidState.isValidState(); }
4004 
4005   /// See AbstractState::isAtFixpoint(...)
4006   bool isAtFixpoint() const override { return IsValidState.isAtFixpoint(); }
4007 
4008   /// See AbstractState::indicatePessimisticFixpoint(...)
4009   ChangeStatus indicatePessimisticFixpoint() override {
4010     return IsValidState.indicatePessimisticFixpoint();
4011   }
4012 
4013   /// See AbstractState::indicateOptimisticFixpoint(...)
4014   ChangeStatus indicateOptimisticFixpoint() override {
4015     return IsValidState.indicateOptimisticFixpoint();
4016   }
4017 
4018   /// Return the assumed state
4019   PotentialValuesState &getAssumed() { return *this; }
4020   const PotentialValuesState &getAssumed() const { return *this; }
4021 
4022   /// Return this set. We should check whether this set is valid or not by
4023   /// isValidState() before calling this function.
4024   const SetTy &getAssumedSet() const {
4025     assert(isValidState() && "This set shoud not be used when it is invalid!");
4026     return Set;
4027   }
4028 
4029   /// Returns whether this state contains an undef value or not.
4030   bool undefIsContained() const {
4031     assert(isValidState() && "This flag shoud not be used when it is invalid!");
4032     return UndefIsContained;
4033   }
4034 
4035   bool operator==(const PotentialValuesState &RHS) const {
4036     if (isValidState() != RHS.isValidState())
4037       return false;
4038     if (!isValidState() && !RHS.isValidState())
4039       return true;
4040     if (undefIsContained() != RHS.undefIsContained())
4041       return false;
4042     return Set == RHS.getAssumedSet();
4043   }
4044 
4045   /// Maximum number of potential values to be tracked.
4046   /// This is set by -attributor-max-potential-values command line option
4047   static unsigned MaxPotentialValues;
4048 
4049   /// Return empty set as the best state of potential values.
4050   static PotentialValuesState getBestState() {
4051     return PotentialValuesState(true);
4052   }
4053 
4054   static PotentialValuesState getBestState(PotentialValuesState &PVS) {
4055     return getBestState();
4056   }
4057 
4058   /// Return full set as the worst state of potential values.
4059   static PotentialValuesState getWorstState() {
4060     return PotentialValuesState(false);
4061   }
4062 
4063   /// Union assumed set with the passed value.
4064   void unionAssumed(const MemberTy &C) { insert(C); }
4065 
4066   /// Union assumed set with assumed set of the passed state \p PVS.
4067   void unionAssumed(const PotentialValuesState &PVS) { unionWith(PVS); }
4068 
4069   /// Union assumed set with an undef value.
4070   void unionAssumedWithUndef() { unionWithUndef(); }
4071 
4072   /// "Clamp" this state with \p PVS.
4073   PotentialValuesState operator^=(const PotentialValuesState &PVS) {
4074     IsValidState ^= PVS.IsValidState;
4075     unionAssumed(PVS);
4076     return *this;
4077   }
4078 
4079   PotentialValuesState operator&=(const PotentialValuesState &PVS) {
4080     IsValidState &= PVS.IsValidState;
4081     unionAssumed(PVS);
4082     return *this;
4083   }
4084 
4085 private:
4086   /// Check the size of this set, and invalidate when the size is no
4087   /// less than \p MaxPotentialValues threshold.
4088   void checkAndInvalidate() {
4089     if (Set.size() >= MaxPotentialValues)
4090       indicatePessimisticFixpoint();
4091     else
4092       reduceUndefValue();
4093   }
4094 
4095   /// If this state contains both undef and not undef, we can reduce
4096   /// undef to the not undef value.
4097   void reduceUndefValue() { UndefIsContained = UndefIsContained & Set.empty(); }
4098 
4099   /// Insert an element into this set.
4100   void insert(const MemberTy &C) {
4101     if (!isValidState())
4102       return;
4103     Set.insert(C);
4104     checkAndInvalidate();
4105   }
4106 
4107   /// Take union with R.
4108   void unionWith(const PotentialValuesState &R) {
4109     /// If this is a full set, do nothing.
4110     if (!isValidState())
4111       return;
4112     /// If R is full set, change L to a full set.
4113     if (!R.isValidState()) {
4114       indicatePessimisticFixpoint();
4115       return;
4116     }
4117     for (const MemberTy &C : R.Set)
4118       Set.insert(C);
4119     UndefIsContained |= R.undefIsContained();
4120     checkAndInvalidate();
4121   }
4122 
4123   /// Take union with an undef value.
4124   void unionWithUndef() {
4125     UndefIsContained = true;
4126     reduceUndefValue();
4127   }
4128 
4129   /// Take intersection with R.
4130   void intersectWith(const PotentialValuesState &R) {
4131     /// If R is a full set, do nothing.
4132     if (!R.isValidState())
4133       return;
4134     /// If this is a full set, change this to R.
4135     if (!isValidState()) {
4136       *this = R;
4137       return;
4138     }
4139     SetTy IntersectSet;
4140     for (const MemberTy &C : Set) {
4141       if (R.Set.count(C))
4142         IntersectSet.insert(C);
4143     }
4144     Set = IntersectSet;
4145     UndefIsContained &= R.undefIsContained();
4146     reduceUndefValue();
4147   }
4148 
4149   /// A helper state which indicate whether this state is valid or not.
4150   BooleanState IsValidState;
4151 
4152   /// Container for potential values
4153   SetTy Set;
4154 
4155   /// Flag for undef value
4156   bool UndefIsContained;
4157 };
4158 
4159 using PotentialConstantIntValuesState = PotentialValuesState<APInt>;
4160 
4161 raw_ostream &operator<<(raw_ostream &OS,
4162                         const PotentialConstantIntValuesState &R);
4163 
4164 /// An abstract interface for potential values analysis.
4165 ///
4166 /// This AA collects potential values for each IR position.
4167 /// An assumed set of potential values is initialized with the empty set (the
4168 /// best state) and it will grow monotonically as we find more potential values
4169 /// for this position.
4170 /// The set might be forced to the worst state, that is, to contain every
4171 /// possible value for this position in 2 cases.
4172 ///   1. We surpassed the \p MaxPotentialValues threshold. This includes the
4173 ///      case that this position is affected (e.g. because of an operation) by a
4174 ///      Value that is in the worst state.
4175 ///   2. We tried to initialize on a Value that we cannot handle (e.g. an
4176 ///      operator we do not currently handle).
4177 ///
4178 /// TODO: Support values other than constant integers.
4179 struct AAPotentialValues
4180     : public StateWrapper<PotentialConstantIntValuesState, AbstractAttribute> {
4181   using Base = StateWrapper<PotentialConstantIntValuesState, AbstractAttribute>;
4182   AAPotentialValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4183 
4184   /// See AbstractAttribute::getState(...).
4185   PotentialConstantIntValuesState &getState() override { return *this; }
4186   const PotentialConstantIntValuesState &getState() const override {
4187     return *this;
4188   }
4189 
4190   /// Create an abstract attribute view for the position \p IRP.
4191   static AAPotentialValues &createForPosition(const IRPosition &IRP,
4192                                               Attributor &A);
4193 
4194   /// Return assumed constant for the associated value
4195   Optional<ConstantInt *>
4196   getAssumedConstantInt(Attributor &A,
4197                         const Instruction *CtxI = nullptr) const {
4198     if (!isValidState())
4199       return nullptr;
4200     if (getAssumedSet().size() == 1)
4201       return cast<ConstantInt>(ConstantInt::get(getAssociatedValue().getType(),
4202                                                 *(getAssumedSet().begin())));
4203     if (getAssumedSet().size() == 0) {
4204       if (undefIsContained())
4205         return cast<ConstantInt>(
4206             ConstantInt::get(getAssociatedValue().getType(), 0));
4207       return llvm::None;
4208     }
4209 
4210     return nullptr;
4211   }
4212 
4213   /// See AbstractAttribute::getName()
4214   const std::string getName() const override { return "AAPotentialValues"; }
4215 
4216   /// See AbstractAttribute::getIdAddr()
4217   const char *getIdAddr() const override { return &ID; }
4218 
4219   /// This function should return true if the type of the \p AA is
4220   /// AAPotentialValues
4221   static bool classof(const AbstractAttribute *AA) {
4222     return (AA->getIdAddr() == &ID);
4223   }
4224 
4225   /// Unique ID (due to the unique address)
4226   static const char ID;
4227 };
4228 
4229 /// An abstract interface for all noundef attributes.
4230 struct AANoUndef
4231     : public IRAttribute<Attribute::NoUndef,
4232                          StateWrapper<BooleanState, AbstractAttribute>> {
4233   AANoUndef(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4234 
4235   /// Return true if we assume that the underlying value is noundef.
4236   bool isAssumedNoUndef() const { return getAssumed(); }
4237 
4238   /// Return true if we know that underlying value is noundef.
4239   bool isKnownNoUndef() const { return getKnown(); }
4240 
4241   /// Create an abstract attribute view for the position \p IRP.
4242   static AANoUndef &createForPosition(const IRPosition &IRP, Attributor &A);
4243 
4244   /// See AbstractAttribute::getName()
4245   const std::string getName() const override { return "AANoUndef"; }
4246 
4247   /// See AbstractAttribute::getIdAddr()
4248   const char *getIdAddr() const override { return &ID; }
4249 
4250   /// This function should return true if the type of the \p AA is AANoUndef
4251   static bool classof(const AbstractAttribute *AA) {
4252     return (AA->getIdAddr() == &ID);
4253   }
4254 
4255   /// Unique ID (due to the unique address)
4256   static const char ID;
4257 };
4258 
4259 struct AACallGraphNode;
4260 struct AACallEdges;
4261 
4262 /// An Iterator for call edges, creates AACallEdges attributes in a lazy way.
4263 /// This iterator becomes invalid if the underlying edge list changes.
4264 /// So This shouldn't outlive a iteration of Attributor.
4265 class AACallEdgeIterator
4266     : public iterator_adaptor_base<AACallEdgeIterator,
4267                                    SetVector<Function *>::iterator> {
4268   AACallEdgeIterator(Attributor &A, SetVector<Function *>::iterator Begin)
4269       : iterator_adaptor_base(Begin), A(A) {}
4270 
4271 public:
4272   AACallGraphNode *operator*() const;
4273 
4274 private:
4275   Attributor &A;
4276   friend AACallEdges;
4277   friend AttributorCallGraph;
4278 };
4279 
4280 struct AACallGraphNode {
4281   AACallGraphNode(Attributor &A) : A(A) {}
4282   virtual ~AACallGraphNode() {}
4283 
4284   virtual AACallEdgeIterator optimisticEdgesBegin() const = 0;
4285   virtual AACallEdgeIterator optimisticEdgesEnd() const = 0;
4286 
4287   /// Iterator range for exploring the call graph.
4288   iterator_range<AACallEdgeIterator> optimisticEdgesRange() const {
4289     return iterator_range<AACallEdgeIterator>(optimisticEdgesBegin(),
4290                                               optimisticEdgesEnd());
4291   }
4292 
4293 protected:
4294   /// Reference to Attributor needed for GraphTraits implementation.
4295   Attributor &A;
4296 };
4297 
4298 /// An abstract state for querying live call edges.
4299 /// This interface uses the Attributor's optimistic liveness
4300 /// information to compute the edges that are alive.
4301 struct AACallEdges : public StateWrapper<BooleanState, AbstractAttribute>,
4302                      AACallGraphNode {
4303   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4304 
4305   AACallEdges(const IRPosition &IRP, Attributor &A)
4306       : Base(IRP), AACallGraphNode(A) {}
4307 
4308   /// Get the optimistic edges.
4309   virtual const SetVector<Function *> &getOptimisticEdges() const = 0;
4310 
4311   /// Is there any call with a unknown callee.
4312   virtual bool hasUnknownCallee() const = 0;
4313 
4314   /// Is there any call with a unknown callee, excluding any inline asm.
4315   virtual bool hasNonAsmUnknownCallee() const = 0;
4316 
4317   /// Iterator for exploring the call graph.
4318   AACallEdgeIterator optimisticEdgesBegin() const override {
4319     return AACallEdgeIterator(A, getOptimisticEdges().begin());
4320   }
4321 
4322   /// Iterator for exploring the call graph.
4323   AACallEdgeIterator optimisticEdgesEnd() const override {
4324     return AACallEdgeIterator(A, getOptimisticEdges().end());
4325   }
4326 
4327   /// Create an abstract attribute view for the position \p IRP.
4328   static AACallEdges &createForPosition(const IRPosition &IRP, Attributor &A);
4329 
4330   /// See AbstractAttribute::getName()
4331   const std::string getName() const override { return "AACallEdges"; }
4332 
4333   /// See AbstractAttribute::getIdAddr()
4334   const char *getIdAddr() const override { return &ID; }
4335 
4336   /// This function should return true if the type of the \p AA is AACallEdges.
4337   static bool classof(const AbstractAttribute *AA) {
4338     return (AA->getIdAddr() == &ID);
4339   }
4340 
4341   /// Unique ID (due to the unique address)
4342   static const char ID;
4343 };
4344 
4345 // Synthetic root node for the Attributor's internal call graph.
4346 struct AttributorCallGraph : public AACallGraphNode {
4347   AttributorCallGraph(Attributor &A) : AACallGraphNode(A) {}
4348   virtual ~AttributorCallGraph() {}
4349 
4350   AACallEdgeIterator optimisticEdgesBegin() const override {
4351     return AACallEdgeIterator(A, A.Functions.begin());
4352   }
4353 
4354   AACallEdgeIterator optimisticEdgesEnd() const override {
4355     return AACallEdgeIterator(A, A.Functions.end());
4356   }
4357 
4358   /// Force populate the entire call graph.
4359   void populateAll() const {
4360     for (const AACallGraphNode *AA : optimisticEdgesRange()) {
4361       // Nothing else to do here.
4362       (void)AA;
4363     }
4364   }
4365 
4366   void print();
4367 };
4368 
4369 template <> struct GraphTraits<AACallGraphNode *> {
4370   using NodeRef = AACallGraphNode *;
4371   using ChildIteratorType = AACallEdgeIterator;
4372 
4373   static AACallEdgeIterator child_begin(AACallGraphNode *Node) {
4374     return Node->optimisticEdgesBegin();
4375   }
4376 
4377   static AACallEdgeIterator child_end(AACallGraphNode *Node) {
4378     return Node->optimisticEdgesEnd();
4379   }
4380 };
4381 
4382 template <>
4383 struct GraphTraits<AttributorCallGraph *>
4384     : public GraphTraits<AACallGraphNode *> {
4385   using nodes_iterator = AACallEdgeIterator;
4386 
4387   static AACallGraphNode *getEntryNode(AttributorCallGraph *G) {
4388     return static_cast<AACallGraphNode *>(G);
4389   }
4390 
4391   static AACallEdgeIterator nodes_begin(const AttributorCallGraph *G) {
4392     return G->optimisticEdgesBegin();
4393   }
4394 
4395   static AACallEdgeIterator nodes_end(const AttributorCallGraph *G) {
4396     return G->optimisticEdgesEnd();
4397   }
4398 };
4399 
4400 template <>
4401 struct DOTGraphTraits<AttributorCallGraph *> : public DefaultDOTGraphTraits {
4402   DOTGraphTraits(bool Simple = false) : DefaultDOTGraphTraits(Simple) {}
4403 
4404   std::string getNodeLabel(const AACallGraphNode *Node,
4405                            const AttributorCallGraph *Graph) {
4406     const AACallEdges *AACE = static_cast<const AACallEdges *>(Node);
4407     return AACE->getAssociatedFunction()->getName().str();
4408   }
4409 
4410   static bool isNodeHidden(const AACallGraphNode *Node,
4411                            const AttributorCallGraph *Graph) {
4412     // Hide the synth root.
4413     return static_cast<const AACallGraphNode *>(Graph) == Node;
4414   }
4415 };
4416 
4417 struct AAExecutionDomain
4418     : public StateWrapper<BooleanState, AbstractAttribute> {
4419   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4420   AAExecutionDomain(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4421 
4422   /// Create an abstract attribute view for the position \p IRP.
4423   static AAExecutionDomain &createForPosition(const IRPosition &IRP,
4424                                               Attributor &A);
4425 
4426   /// See AbstractAttribute::getName().
4427   const std::string getName() const override { return "AAExecutionDomain"; }
4428 
4429   /// See AbstractAttribute::getIdAddr().
4430   const char *getIdAddr() const override { return &ID; }
4431 
4432   /// Check if an instruction is executed only by the initial thread.
4433   virtual bool isExecutedByInitialThreadOnly(const Instruction &) const = 0;
4434 
4435   /// Check if a basic block is executed only by the initial thread.
4436   virtual bool isExecutedByInitialThreadOnly(const BasicBlock &) const = 0;
4437 
4438   /// This function should return true if the type of the \p AA is
4439   /// AAExecutionDomain.
4440   static bool classof(const AbstractAttribute *AA) {
4441     return (AA->getIdAddr() == &ID);
4442   }
4443 
4444   /// Unique ID (due to the unique address)
4445   static const char ID;
4446 };
4447 
4448 /// An abstract Attribute for computing reachability between functions.
4449 struct AAFunctionReachability
4450     : public StateWrapper<BooleanState, AbstractAttribute> {
4451   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4452 
4453   AAFunctionReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4454 
4455   /// If the function represented by this possition can reach \p Fn.
4456   virtual bool canReach(Attributor &A, Function *Fn) const = 0;
4457 
4458   /// Create an abstract attribute view for the position \p IRP.
4459   static AAFunctionReachability &createForPosition(const IRPosition &IRP,
4460                                                    Attributor &A);
4461 
4462   /// See AbstractAttribute::getName()
4463   const std::string getName() const override { return "AAFuncitonReacability"; }
4464 
4465   /// See AbstractAttribute::getIdAddr()
4466   const char *getIdAddr() const override { return &ID; }
4467 
4468   /// This function should return true if the type of the \p AA is AACallEdges.
4469   static bool classof(const AbstractAttribute *AA) {
4470     return (AA->getIdAddr() == &ID);
4471   }
4472 
4473   /// Unique ID (due to the unique address)
4474   static const char ID;
4475 
4476 private:
4477   /// Can this function reach a call with unknown calee.
4478   virtual bool canReachUnknownCallee() const = 0;
4479 };
4480 
4481 /// An abstract interface for struct information.
4482 struct AAPointerInfo : public AbstractAttribute {
4483   AAPointerInfo(const IRPosition &IRP) : AbstractAttribute(IRP) {}
4484 
4485   enum AccessKind {
4486     AK_READ = 1 << 0,
4487     AK_WRITE = 1 << 1,
4488     AK_READ_WRITE = AK_READ | AK_WRITE,
4489   };
4490 
4491   /// An access description.
4492   struct Access {
4493     Access(Instruction *I, Optional<Value *> Content, AccessKind Kind, Type *Ty)
4494         : LocalI(I), RemoteI(I), Content(Content), Kind(Kind), Ty(Ty) {}
4495     Access(Instruction *LocalI, Instruction *RemoteI, Optional<Value *> Content,
4496            AccessKind Kind, Type *Ty)
4497         : LocalI(LocalI), RemoteI(RemoteI), Content(Content), Kind(Kind),
4498           Ty(Ty) {}
4499     Access(const Access &Other)
4500         : LocalI(Other.LocalI), RemoteI(Other.RemoteI), Content(Other.Content),
4501           Kind(Other.Kind), Ty(Other.Ty) {}
4502     Access(const Access &&Other)
4503         : LocalI(Other.LocalI), RemoteI(Other.RemoteI), Content(Other.Content),
4504           Kind(Other.Kind), Ty(Other.Ty) {}
4505 
4506     Access &operator=(const Access &Other) {
4507       LocalI = Other.LocalI;
4508       RemoteI = Other.RemoteI;
4509       Content = Other.Content;
4510       Kind = Other.Kind;
4511       Ty = Other.Ty;
4512       return *this;
4513     }
4514     bool operator==(const Access &R) const {
4515       return LocalI == R.LocalI && RemoteI == R.RemoteI &&
4516              Content == R.Content && Kind == R.Kind;
4517     }
4518     bool operator!=(const Access &R) const { return !(*this == R); }
4519 
4520     Access &operator&=(const Access &R) {
4521       assert(RemoteI == R.RemoteI && "Expected same instruction!");
4522       Content =
4523           AA::combineOptionalValuesInAAValueLatice(Content, R.Content, Ty);
4524       Kind = AccessKind(Kind | R.Kind);
4525       return *this;
4526     }
4527 
4528     /// Return the access kind.
4529     AccessKind getKind() const { return Kind; }
4530 
4531     /// Return true if this is a read access.
4532     bool isRead() const { return Kind & AK_READ; }
4533 
4534     /// Return true if this is a write access.
4535     bool isWrite() const { return Kind & AK_WRITE; }
4536 
4537     /// Return the instruction that causes the access with respect to the local
4538     /// scope of the associated attribute.
4539     Instruction *getLocalInst() const { return LocalI; }
4540 
4541     /// Return the actual instruction that causes the access.
4542     Instruction *getRemoteInst() const { return RemoteI; }
4543 
4544     /// Return true if the value written is not known yet.
4545     bool isWrittenValueYetUndetermined() const { return !Content.hasValue(); }
4546 
4547     /// Return true if the value written cannot be determined at all.
4548     bool isWrittenValueUnknown() const {
4549       return Content.hasValue() && !*Content;
4550     }
4551 
4552     /// Return the type associated with the access, if known.
4553     Type *getType() const { return Ty; }
4554 
4555     /// Return the value writen, if any. As long as
4556     /// isWrittenValueYetUndetermined return true this function shall not be
4557     /// called.
4558     Value *getWrittenValue() const { return *Content; }
4559 
4560     /// Return the written value which can be `llvm::null` if it is not yet
4561     /// determined.
4562     Optional<Value *> getContent() const { return Content; }
4563 
4564   private:
4565     /// The instruction responsible for the access with respect to the local
4566     /// scope of the associated attribute.
4567     Instruction *LocalI;
4568 
4569     /// The instruction responsible for the access.
4570     Instruction *RemoteI;
4571 
4572     /// The value written, if any. `llvm::none` means "not known yet", `nullptr`
4573     /// cannot be determined.
4574     Optional<Value *> Content;
4575 
4576     /// The access kind, e.g., READ, as bitset (could be more than one).
4577     AccessKind Kind;
4578 
4579     /// The type of the content, thus the type read/written, can be null if not
4580     /// available.
4581     Type *Ty;
4582   };
4583 
4584   /// Create an abstract attribute view for the position \p IRP.
4585   static AAPointerInfo &createForPosition(const IRPosition &IRP, Attributor &A);
4586 
4587   /// See AbstractAttribute::getName()
4588   const std::string getName() const override { return "AAPointerInfo"; }
4589 
4590   /// See AbstractAttribute::getIdAddr()
4591   const char *getIdAddr() const override { return &ID; }
4592 
4593   /// Call \p CB on all accesses that might interfere with \p LI and return true
4594   /// if all such accesses were known and the callback returned true for all of
4595   /// them, false otherwise.
4596   virtual bool forallInterferingAccesses(
4597       LoadInst &LI, function_ref<bool(const Access &, bool)> CB) const = 0;
4598   virtual bool forallInterferingAccesses(
4599       StoreInst &SI, function_ref<bool(const Access &, bool)> CB) const = 0;
4600 
4601   /// This function should return true if the type of the \p AA is AAPointerInfo
4602   static bool classof(const AbstractAttribute *AA) {
4603     return (AA->getIdAddr() == &ID);
4604   }
4605 
4606   /// Unique ID (due to the unique address)
4607   static const char ID;
4608 };
4609 
4610 raw_ostream &operator<<(raw_ostream &, const AAPointerInfo::Access &);
4611 
4612 /// Run options, used by the pass manager.
4613 enum AttributorRunOption {
4614   NONE = 0,
4615   MODULE = 1 << 0,
4616   CGSCC = 1 << 1,
4617   ALL = MODULE | CGSCC
4618 };
4619 
4620 } // end namespace llvm
4621 
4622 #endif // LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
4623