1 //===- Attributor.h --- Module-wide attribute deduction ---------*- C++ -*-===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
// Attributor: An interprocedural (abstract) "attribute" deduction framework.
//
// The Attributor framework is an interprocedural abstract analysis (fixpoint
// iteration analysis). The goal is to allow easy deduction of new attributes
// as well as information exchange between abstract attributes in-flight.
14 //
15 // The Attributor class is the driver and the link between the various abstract
16 // attributes. The Attributor will iterate until a fixpoint state is reached by
// all abstract attributes in-flight, or until it enforces a pessimistic
// fixpoint because an iteration limit is reached.
19 //
20 // Abstract attributes, derived from the AbstractAttribute class, actually
21 // describe properties of the code. They can correspond to actual LLVM-IR
22 // attributes, or they can be more general, ultimately unrelated to LLVM-IR
// attributes. The latter is useful when an abstract attribute provides
// information to other abstract attributes in-flight but we might not want to
// manifest the information. The Attributor allows querying in-flight abstract
26 // attributes through the `Attributor::getAAFor` method (see the method
27 // description for an example). If the method is used by an abstract attribute
28 // P, and it results in an abstract attribute Q, the Attributor will
29 // automatically capture a potential dependence from Q to P. This dependence
30 // will cause P to be reevaluated whenever Q changes in the future.
31 //
32 // The Attributor will only reevaluate abstract attributes that might have
// changed since the last iteration. That means that the Attributor will not
34 // revisit all instructions/blocks/functions in the module but only query
35 // an update from a subset of the abstract attributes.
36 //
37 // The update method `AbstractAttribute::updateImpl` is implemented by the
38 // specific "abstract attribute" subclasses. The method is invoked whenever the
39 // currently assumed state (see the AbstractState class) might not be valid
40 // anymore. This can, for example, happen if the state was dependent on another
41 // abstract attribute that changed. In every invocation, the update method has
42 // to adjust the internal state of an abstract attribute to a point that is
43 // justifiable by the underlying IR and the current state of abstract attributes
44 // in-flight. Since the IR is given and assumed to be valid, the information
45 // derived from it can be assumed to hold. However, information derived from
46 // other abstract attributes is conditional on various things. If the justifying
47 // state changed, the `updateImpl` has to revisit the situation and potentially
// find another justification or limit the optimistic assumptions made.
49 //
50 // Change is the key in this framework. Until a state of no-change, thus a
51 // fixpoint, is reached, the Attributor will query the abstract attributes
// in-flight to re-evaluate their state. If the (current) state is too
// optimistic, that is, it can no longer be justified through other abstract
// attributes or the state of the IR, the state of the abstract attribute will
// have to change. Generally, we assume abstract attribute states to form a
// finite-height lattice and the update function to be monotone. However, these
// conditions are not enforced because the iteration limit will guarantee
// termination. If an optimistic fixpoint is reached, or a pessimistic
// fixpoint is enforced after a timeout, the abstract attributes are tasked to
60 // manifest their result in the IR for passes to come.
61 //
62 // Attribute manifestation is not mandatory. If desired, there is support to
63 // generate a single or multiple LLVM-IR attributes already in the helper struct
64 // IRAttribute. In the simplest case, a subclass inherits from IRAttribute with
65 // a proper Attribute::AttrKind as template parameter. The Attributor
66 // manifestation framework will then create and place a new attribute if it is
67 // allowed to do so (based on the abstract state). Other use cases can be
// achieved by overriding AbstractAttribute or IRAttribute methods.
69 //
70 //
71 // The "mechanics" of adding a new "abstract attribute":
72 // - Define a class (transitively) inheriting from AbstractAttribute and one
73 //   (which could be the same) that (transitively) inherits from AbstractState.
74 //   For the latter, consider the already available BooleanState and
//   {Inc,Dec,Bit}IntegerState if they fit your needs, e.g., you only need to
//   track a number or a bit-encoding.
// - Implement all pure methods. Also override further methods if the
//   attribute does not conform to the "default" behavior: a (set of) LLVM-IR
//   attribute(s) for an argument, call site argument, function return value,
//   or function. See the class and method descriptions for more information
//   on the two "Abstract" classes and their respective methods.
82 // - Register opportunities for the new abstract attribute in the
83 //   `Attributor::identifyDefaultAbstractAttributes` method if it should be
84 //   counted as a 'default' attribute.
85 // - Add sufficient tests.
86 // - Add a Statistics object for bookkeeping. If it is a simple (set of)
87 //   attribute(s) manifested through the Attributor manifestation framework, see
88 //   the bookkeeping function in Attributor.cpp.
89 // - If instructions with a certain opcode are interesting to the attribute, add
90 //   that opcode to the switch in `Attributor::identifyAbstractAttributes`. This
91 //   will make it possible to query all those instructions through the
92 //   `InformationCache::getOpcodeInstMapForFunction` interface and eliminate the
93 //   need to traverse the IR repeatedly.
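//
// A minimal sketch of such an abstract attribute is shown below. This is
// illustrative only; the AAMyProperty name is hypothetical and real subclasses
// need additional boilerplate, e.g., for creation, manifestation, and
// statistics tracking, that is omitted here:
//
//   struct AAMyProperty
//       : public StateWrapper<BooleanState, AbstractAttribute> {
//     using Base = StateWrapper<BooleanState, AbstractAttribute>;
//     AAMyProperty(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
//
//     // Re-justify the assumed state against the IR and the abstract
//     // attributes in-flight; give up if that is not possible.
//     ChangeStatus updateImpl(Attributor &A) override {
//       if (/* assumed state can no longer be justified */ false)
//         return indicatePessimisticFixpoint();
//       return ChangeStatus::UNCHANGED;
//     }
//   };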
94 //
95 //===----------------------------------------------------------------------===//
96 
97 #ifndef LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
98 #define LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
99 
100 #include "llvm/ADT/DenseSet.h"
101 #include "llvm/ADT/GraphTraits.h"
102 #include "llvm/ADT/MapVector.h"
103 #include "llvm/ADT/STLExtras.h"
104 #include "llvm/ADT/SetOperations.h"
105 #include "llvm/ADT/SetVector.h"
106 #include "llvm/ADT/Triple.h"
107 #include "llvm/ADT/iterator.h"
108 #include "llvm/Analysis/AssumeBundleQueries.h"
109 #include "llvm/Analysis/CFG.h"
110 #include "llvm/Analysis/CGSCCPassManager.h"
111 #include "llvm/Analysis/LazyCallGraph.h"
112 #include "llvm/Analysis/LoopInfo.h"
113 #include "llvm/Analysis/MustExecute.h"
114 #include "llvm/Analysis/OptimizationRemarkEmitter.h"
115 #include "llvm/Analysis/PostDominators.h"
116 #include "llvm/Analysis/TargetLibraryInfo.h"
117 #include "llvm/IR/AbstractCallSite.h"
118 #include "llvm/IR/ConstantRange.h"
119 #include "llvm/IR/Constants.h"
120 #include "llvm/IR/InstIterator.h"
121 #include "llvm/IR/Instruction.h"
122 #include "llvm/IR/PassManager.h"
123 #include "llvm/IR/Value.h"
124 #include "llvm/Support/Alignment.h"
125 #include "llvm/Support/Allocator.h"
126 #include "llvm/Support/Casting.h"
127 #include "llvm/Support/DOTGraphTraits.h"
128 #include "llvm/Support/TimeProfiler.h"
129 #include "llvm/Transforms/Utils/CallGraphUpdater.h"
130 
131 #include <map>
132 
133 namespace llvm {
134 
135 class DataLayout;
136 class LLVMContext;
137 class Pass;
138 template <typename Fn> class function_ref;
139 struct AADepGraphNode;
140 struct AADepGraph;
141 struct Attributor;
142 struct AbstractAttribute;
143 struct InformationCache;
144 struct AAIsDead;
145 struct AttributorCallGraph;
146 struct IRPosition;
147 
148 class AAResults;
149 class Function;
150 
151 /// Abstract Attribute helper functions.
152 namespace AA {
153 
154 /// Flags to distinguish intra-procedural queries from *potentially*
/// inter-procedural queries. Note that information can be valid for both and
156 /// therefore both bits might be set.
157 enum ValueScope : uint8_t {
158   Intraprocedural = 1,
159   Interprocedural = 2,
160   AnyScope = Intraprocedural | Interprocedural,
161 };
162 
163 struct ValueAndContext : public std::pair<Value *, const Instruction *> {
164   using Base = std::pair<Value *, const Instruction *>;
165   ValueAndContext(const Base &B) : Base(B) {}
166   ValueAndContext(Value &V, const Instruction *CtxI) : Base(&V, CtxI) {}
167   ValueAndContext(Value &V, const Instruction &CtxI) : Base(&V, &CtxI) {}
168 
169   Value *getValue() const { return this->first; }
170   const Instruction *getCtxI() const { return this->second; }
171 };
172 
173 /// Return true if \p I is a `nosync` instruction. Use generic reasoning and
174 /// potentially the corresponding AANoSync.
175 bool isNoSyncInst(Attributor &A, const Instruction &I,
176                   const AbstractAttribute &QueryingAA);
177 
178 /// Return true if \p V is dynamically unique, that is, there are no two
179 /// "instances" of \p V at runtime with different values.
180 /// Note: If \p ForAnalysisOnly is set we only check that the Attributor will
/// never use \p V to represent two "instances", not that \p V could not
182 /// technically represent them.
183 bool isDynamicallyUnique(Attributor &A, const AbstractAttribute &QueryingAA,
184                          const Value &V, bool ForAnalysisOnly = true);
185 
186 /// Return true if \p V is a valid value in \p Scope, that is a constant or an
187 /// instruction/argument of \p Scope.
188 bool isValidInScope(const Value &V, const Function *Scope);
189 
/// Return true if the value of \p VAC is valid at the position of \p VAC,
191 /// that is a constant, an argument of the same function, or an instruction in
192 /// that function that dominates the position.
193 bool isValidAtPosition(const ValueAndContext &VAC, InformationCache &InfoCache);
194 
195 /// Try to convert \p V to type \p Ty without introducing new instructions. If
196 /// this is not possible return `nullptr`. Note: this function basically knows
197 /// how to cast various constants.
198 Value *getWithType(Value &V, Type &Ty);
199 
200 /// Return the combination of \p A and \p B such that the result is a possible
/// value of both. \p B is potentially cast to match the type \p Ty or the
202 /// type of \p A if \p Ty is null.
203 ///
204 /// Examples:
205 ///        X + none  => X
206 /// not_none + undef => not_none
207 ///          V1 + V2 => nullptr
208 Optional<Value *>
209 combineOptionalValuesInAAValueLatice(const Optional<Value *> &A,
210                                      const Optional<Value *> &B, Type *Ty);
211 
212 /// Return the initial value of \p Obj with type \p Ty if that is a constant.
213 Constant *getInitialValueForObj(Value &Obj, Type &Ty,
214                                 const TargetLibraryInfo *TLI);
215 
216 /// Collect all potential underlying objects of \p Ptr at position \p CtxI in
217 /// \p Objects. Assumed information is used and dependences onto \p QueryingAA
218 /// are added appropriately.
219 ///
220 /// \returns True if \p Objects contains all assumed underlying objects, and
221 ///          false if something went wrong and the objects could not be
222 ///          determined.
223 bool getAssumedUnderlyingObjects(
224     Attributor &A, const Value &Ptr, SmallSetVector<Value *, 8> &Objects,
225     const AbstractAttribute &QueryingAA, const Instruction *CtxI,
226     bool &UsedAssumedInformation, AA::ValueScope VS = AA::Interprocedural,
227     SmallPtrSetImpl<Value *> *SeenObjects = nullptr);
228 
229 /// Collect all potential values \p LI could read into \p PotentialValues. That
230 /// is, the only values read by \p LI are assumed to be known and all are in
231 /// \p PotentialValues. \p PotentialValueOrigins will contain all the
232 /// instructions that might have put a potential value into \p PotentialValues.
233 /// Dependences onto \p QueryingAA are properly tracked, \p
234 /// UsedAssumedInformation will inform the caller if assumed information was
235 /// used.
236 ///
237 /// \returns True if the assumed potential copies are all in \p PotentialValues,
238 ///          false if something went wrong and the copies could not be
239 ///          determined.
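///
/// A sketch of a typical query (the surrounding abstract attribute provides
/// \p A, \p LI, and itself as \p QueryingAA):
/// \code
///   SmallSetVector<Value *, 4> PotentialValues;
///   SmallSetVector<Instruction *, 4> PotentialValueOrigins;
///   bool UsedAssumedInformation = false;
///   if (AA::getPotentiallyLoadedValues(A, LI, PotentialValues,
///                                      PotentialValueOrigins, QueryingAA,
///                                      UsedAssumedInformation))
///     ; // All values the load may read are now in PotentialValues.
/// \endcode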
240 bool getPotentiallyLoadedValues(
241     Attributor &A, LoadInst &LI, SmallSetVector<Value *, 4> &PotentialValues,
242     SmallSetVector<Instruction *, 4> &PotentialValueOrigins,
243     const AbstractAttribute &QueryingAA, bool &UsedAssumedInformation,
244     bool OnlyExact = false);
245 
246 /// Collect all potential values of the one stored by \p SI into
247 /// \p PotentialCopies. That is, the only copies that were made via the
248 /// store are assumed to be known and all are in \p PotentialCopies. Dependences
249 /// onto \p QueryingAA are properly tracked, \p UsedAssumedInformation will
250 /// inform the caller if assumed information was used.
251 ///
252 /// \returns True if the assumed potential copies are all in \p PotentialCopies,
253 ///          false if something went wrong and the copies could not be
254 ///          determined.
255 bool getPotentialCopiesOfStoredValue(
256     Attributor &A, StoreInst &SI, SmallSetVector<Value *, 4> &PotentialCopies,
257     const AbstractAttribute &QueryingAA, bool &UsedAssumedInformation,
258     bool OnlyExact = false);
259 
260 /// Return true if \p IRP is readonly. This will query respective AAs that
261 /// deduce the information and introduce dependences for \p QueryingAA.
262 bool isAssumedReadOnly(Attributor &A, const IRPosition &IRP,
263                        const AbstractAttribute &QueryingAA, bool &IsKnown);
264 
265 /// Return true if \p IRP is readnone. This will query respective AAs that
266 /// deduce the information and introduce dependences for \p QueryingAA.
267 bool isAssumedReadNone(Attributor &A, const IRPosition &IRP,
268                        const AbstractAttribute &QueryingAA, bool &IsKnown);
269 
270 /// Return true if \p ToI is potentially reachable from \p FromI. The two
271 /// instructions do not need to be in the same function. \p GoBackwardsCB
272 /// can be provided to convey domain knowledge about the "lifespan" the user is
273 /// interested in. By default, the callers of \p FromI are checked as well to
274 /// determine if \p ToI can be reached. If the query is not interested in
275 /// callers beyond a certain point, e.g., a GPU kernel entry or the function
276 /// containing an alloca, the \p GoBackwardsCB should return false.
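///
/// For example, a query that should not look beyond the function containing an
/// alloca could pass a callback like the following (a sketch; `AllocaScope` is
/// a hypothetical Function pointer):
/// \code
///   auto GoBackwardsCB = [&](const Function &Fn) {
///     // Do not follow call sites of the function that contains the alloca.
///     return &Fn != AllocaScope;
///   };
///   bool MayReach =
///       AA::isPotentiallyReachable(A, FromI, ToI, QueryingAA, GoBackwardsCB);
/// \endcode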
277 bool isPotentiallyReachable(
278     Attributor &A, const Instruction &FromI, const Instruction &ToI,
279     const AbstractAttribute &QueryingAA,
280     std::function<bool(const Function &F)> GoBackwardsCB = nullptr);
281 
282 /// Same as above but it is sufficient to reach any instruction in \p ToFn.
283 bool isPotentiallyReachable(
284     Attributor &A, const Instruction &FromI, const Function &ToFn,
285     const AbstractAttribute &QueryingAA,
286     std::function<bool(const Function &F)> GoBackwardsCB);
287 
288 } // namespace AA
289 
290 template <>
291 struct DenseMapInfo<AA::ValueAndContext>
292     : public DenseMapInfo<AA::ValueAndContext::Base> {
293   using Base = DenseMapInfo<AA::ValueAndContext::Base>;
294   static inline AA::ValueAndContext getEmptyKey() {
295     return Base::getEmptyKey();
296   }
297   static inline AA::ValueAndContext getTombstoneKey() {
298     return Base::getTombstoneKey();
299   }
300   static unsigned getHashValue(const AA::ValueAndContext &VAC) {
301     return Base::getHashValue(VAC);
302   }
303 
304   static bool isEqual(const AA::ValueAndContext &LHS,
305                       const AA::ValueAndContext &RHS) {
306     return Base::isEqual(LHS, RHS);
307   }
308 };
309 
310 template <>
311 struct DenseMapInfo<AA::ValueScope> : public DenseMapInfo<unsigned char> {
312   using Base = DenseMapInfo<unsigned char>;
313   static inline AA::ValueScope getEmptyKey() {
314     return AA::ValueScope(Base::getEmptyKey());
315   }
316   static inline AA::ValueScope getTombstoneKey() {
317     return AA::ValueScope(Base::getTombstoneKey());
318   }
319   static unsigned getHashValue(const AA::ValueScope &S) {
320     return Base::getHashValue(S);
321   }
322 
323   static bool isEqual(const AA::ValueScope &LHS, const AA::ValueScope &RHS) {
324     return Base::isEqual(LHS, RHS);
325   }
326 };
327 
/// The value passed to the command line option that defines the maximal
/// initialization chain length.
330 extern unsigned MaxInitializationChainLength;
331 
332 ///{
333 enum class ChangeStatus {
334   CHANGED,
335   UNCHANGED,
336 };
337 
338 ChangeStatus operator|(ChangeStatus l, ChangeStatus r);
339 ChangeStatus &operator|=(ChangeStatus &l, ChangeStatus r);
340 ChangeStatus operator&(ChangeStatus l, ChangeStatus r);
341 ChangeStatus &operator&=(ChangeStatus &l, ChangeStatus r);
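
// Combining two ChangeStatus values with `|` yields CHANGED if either operand
// is CHANGED, which is convenient to accumulate changes over several steps,
// e.g. (a sketch; `manifestSomething` is a hypothetical helper returning a
// ChangeStatus):
//
//   ChangeStatus Changed = ChangeStatus::UNCHANGED;
//   Changed |= manifestSomething();
//   Changed |= manifestSomethingElse();
//   return Changed;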
342 
343 enum class DepClassTy {
344   REQUIRED, ///< The target cannot be valid if the source is not.
345   OPTIONAL, ///< The target may be valid if the source is not.
346   NONE,     ///< Do not track a dependence between source and target.
347 };
348 ///}
349 
350 /// The data structure for the nodes of a dependency graph
351 struct AADepGraphNode {
352 public:
353   virtual ~AADepGraphNode() = default;
354   using DepTy = PointerIntPair<AADepGraphNode *, 1>;
355 
356 protected:
357   /// Set of dependency graph nodes which should be updated if this one
358   /// is updated. The bit encodes if it is optional.
359   TinyPtrVector<DepTy> Deps;
360 
361   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
362   static AbstractAttribute *DepGetValAA(DepTy &DT) {
363     return cast<AbstractAttribute>(DT.getPointer());
364   }
365 
366   operator AbstractAttribute *() { return cast<AbstractAttribute>(this); }
367 
368 public:
369   using iterator =
370       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
371   using aaiterator =
372       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetValAA)>;
373 
374   aaiterator begin() { return aaiterator(Deps.begin(), &DepGetValAA); }
375   aaiterator end() { return aaiterator(Deps.end(), &DepGetValAA); }
376   iterator child_begin() { return iterator(Deps.begin(), &DepGetVal); }
377   iterator child_end() { return iterator(Deps.end(), &DepGetVal); }
378 
379   virtual void print(raw_ostream &OS) const { OS << "AADepNode Impl\n"; }
380   TinyPtrVector<DepTy> &getDeps() { return Deps; }
381 
382   friend struct Attributor;
383   friend struct AADepGraph;
384 };
385 
386 /// The data structure for the dependency graph
387 ///
388 /// Note that in this graph if there is an edge from A to B (A -> B),
389 /// then it means that B depends on A, and when the state of A is
/// updated, node B should also be updated.
391 struct AADepGraph {
392   AADepGraph() = default;
393   ~AADepGraph() = default;
394 
395   using DepTy = AADepGraphNode::DepTy;
396   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
397   using iterator =
398       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
399 
400   /// There is no root node for the dependency graph. But the SCCIterator
/// requires a single entry point, so we maintain a fake ("synthetic") root
402   /// node that depends on every node.
403   AADepGraphNode SyntheticRoot;
404   AADepGraphNode *GetEntryNode() { return &SyntheticRoot; }
405 
406   iterator begin() { return SyntheticRoot.child_begin(); }
407   iterator end() { return SyntheticRoot.child_end(); }
408 
409   void viewGraph();
410 
411   /// Dump graph to file
412   void dumpGraph();
413 
414   /// Print dependency graph
415   void print();
416 };
417 
418 /// Helper to describe and deal with positions in the LLVM-IR.
419 ///
420 /// A position in the IR is described by an anchor value and an "offset" that
421 /// could be the argument number, for call sites and arguments, or an indicator
422 /// of the "position kind". The kinds, specified in the Kind enum below, include
423 /// the locations in the attribute list, i.a., function scope and return value,
424 /// as well as a distinction between call sites and functions. Finally, there
425 /// are floating values that do not have a corresponding attribute list
426 /// position.
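///
/// A few illustrative ways to create positions (a sketch; `F`, `Arg`, and `CB`
/// are assumed to be a Function, an Argument, and a CallBase, respectively):
/// \code
///   IRPosition FnPos    = IRPosition::function(F);
///   IRPosition RetPos   = IRPosition::returned(F);
///   IRPosition ArgPos   = IRPosition::argument(Arg);
///   IRPosition CSArgPos = IRPosition::callsite_argument(CB, /* ArgNo */ 0);
///   IRPosition ValPos   = IRPosition::value(*CB.getArgOperand(0));
/// \endcode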
427 struct IRPosition {
428   // NOTE: In the future this definition can be changed to support recursive
429   // functions.
430   using CallBaseContext = CallBase;
431 
432   /// The positions we distinguish in the IR.
433   enum Kind : char {
434     IRP_INVALID,  ///< An invalid position.
435     IRP_FLOAT,    ///< A position that is not associated with a spot suitable
436                   ///< for attributes. This could be any value or instruction.
437     IRP_RETURNED, ///< An attribute for the function return value.
438     IRP_CALL_SITE_RETURNED, ///< An attribute for a call site return value.
439     IRP_FUNCTION,           ///< An attribute for a function (scope).
440     IRP_CALL_SITE,          ///< An attribute for a call site (function scope).
441     IRP_ARGUMENT,           ///< An attribute for a function argument.
442     IRP_CALL_SITE_ARGUMENT, ///< An attribute for a call site argument.
443   };
444 
445   /// Default constructor available to create invalid positions implicitly. All
446   /// other positions need to be created explicitly through the appropriate
447   /// static member function.
448   IRPosition() : Enc(nullptr, ENC_VALUE) { verify(); }
449 
450   /// Create a position describing the value of \p V.
451   static const IRPosition value(const Value &V,
452                                 const CallBaseContext *CBContext = nullptr) {
453     if (auto *Arg = dyn_cast<Argument>(&V))
454       return IRPosition::argument(*Arg, CBContext);
455     if (auto *CB = dyn_cast<CallBase>(&V))
456       return IRPosition::callsite_returned(*CB);
457     return IRPosition(const_cast<Value &>(V), IRP_FLOAT, CBContext);
458   }
459 
460   /// Create a position describing the instruction \p I. This is different from
  /// the value version because call sites are treated as instructions rather
462   /// than their return value in this function.
463   static const IRPosition inst(const Instruction &I,
464                                const CallBaseContext *CBContext = nullptr) {
465     return IRPosition(const_cast<Instruction &>(I), IRP_FLOAT, CBContext);
466   }
467 
468   /// Create a position describing the function scope of \p F.
469   /// \p CBContext is used for call base specific analysis.
470   static const IRPosition function(const Function &F,
471                                    const CallBaseContext *CBContext = nullptr) {
472     return IRPosition(const_cast<Function &>(F), IRP_FUNCTION, CBContext);
473   }
474 
475   /// Create a position describing the returned value of \p F.
476   /// \p CBContext is used for call base specific analysis.
477   static const IRPosition returned(const Function &F,
478                                    const CallBaseContext *CBContext = nullptr) {
479     return IRPosition(const_cast<Function &>(F), IRP_RETURNED, CBContext);
480   }
481 
482   /// Create a position describing the argument \p Arg.
483   /// \p CBContext is used for call base specific analysis.
484   static const IRPosition argument(const Argument &Arg,
485                                    const CallBaseContext *CBContext = nullptr) {
486     return IRPosition(const_cast<Argument &>(Arg), IRP_ARGUMENT, CBContext);
487   }
488 
489   /// Create a position describing the function scope of \p CB.
490   static const IRPosition callsite_function(const CallBase &CB) {
491     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE);
492   }
493 
494   /// Create a position describing the returned value of \p CB.
495   static const IRPosition callsite_returned(const CallBase &CB) {
496     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE_RETURNED);
497   }
498 
499   /// Create a position describing the argument of \p CB at position \p ArgNo.
500   static const IRPosition callsite_argument(const CallBase &CB,
501                                             unsigned ArgNo) {
502     return IRPosition(const_cast<Use &>(CB.getArgOperandUse(ArgNo)),
503                       IRP_CALL_SITE_ARGUMENT);
504   }
505 
506   /// Create a position describing the argument of \p ACS at position \p ArgNo.
507   static const IRPosition callsite_argument(AbstractCallSite ACS,
508                                             unsigned ArgNo) {
509     if (ACS.getNumArgOperands() <= ArgNo)
510       return IRPosition();
511     int CSArgNo = ACS.getCallArgOperandNo(ArgNo);
512     if (CSArgNo >= 0)
513       return IRPosition::callsite_argument(
514           cast<CallBase>(*ACS.getInstruction()), CSArgNo);
515     return IRPosition();
516   }
517 
518   /// Create a position with function scope matching the "context" of \p IRP.
519   /// If \p IRP is a call site (see isAnyCallSitePosition()) then the result
520   /// will be a call site position, otherwise the function position of the
521   /// associated function.
522   static const IRPosition
523   function_scope(const IRPosition &IRP,
524                  const CallBaseContext *CBContext = nullptr) {
525     if (IRP.isAnyCallSitePosition()) {
526       return IRPosition::callsite_function(
527           cast<CallBase>(IRP.getAnchorValue()));
528     }
529     assert(IRP.getAssociatedFunction());
530     return IRPosition::function(*IRP.getAssociatedFunction(), CBContext);
531   }
532 
533   bool operator==(const IRPosition &RHS) const {
534     return Enc == RHS.Enc && RHS.CBContext == CBContext;
535   }
536   bool operator!=(const IRPosition &RHS) const { return !(*this == RHS); }
537 
538   /// Return the value this abstract attribute is anchored with.
539   ///
540   /// The anchor value might not be the associated value if the latter is not
541   /// sufficient to determine where arguments will be manifested. This is, so
542   /// far, only the case for call site arguments as the value is not sufficient
543   /// to pinpoint them. Instead, we can use the call site as an anchor.
544   Value &getAnchorValue() const {
545     switch (getEncodingBits()) {
546     case ENC_VALUE:
547     case ENC_RETURNED_VALUE:
548     case ENC_FLOATING_FUNCTION:
549       return *getAsValuePtr();
550     case ENC_CALL_SITE_ARGUMENT_USE:
551       return *(getAsUsePtr()->getUser());
552     default:
553       llvm_unreachable("Unkown encoding!");
554     };
555   }
556 
557   /// Return the associated function, if any.
558   Function *getAssociatedFunction() const {
559     if (auto *CB = dyn_cast<CallBase>(&getAnchorValue())) {
      // We reuse the logic that associates callback callees to arguments of a
561       // call site here to identify the callback callee as the associated
562       // function.
563       if (Argument *Arg = getAssociatedArgument())
564         return Arg->getParent();
565       return CB->getCalledFunction();
566     }
567     return getAnchorScope();
568   }
569 
570   /// Return the associated argument, if any.
571   Argument *getAssociatedArgument() const;
572 
573   /// Return true if the position refers to a function interface, that is the
574   /// function scope, the function return, or an argument.
575   bool isFnInterfaceKind() const {
576     switch (getPositionKind()) {
577     case IRPosition::IRP_FUNCTION:
578     case IRPosition::IRP_RETURNED:
579     case IRPosition::IRP_ARGUMENT:
580       return true;
581     default:
582       return false;
583     }
584   }
585 
586   /// Return the Function surrounding the anchor value.
587   Function *getAnchorScope() const {
588     Value &V = getAnchorValue();
589     if (isa<Function>(V))
590       return &cast<Function>(V);
591     if (isa<Argument>(V))
592       return cast<Argument>(V).getParent();
593     if (isa<Instruction>(V))
594       return cast<Instruction>(V).getFunction();
595     return nullptr;
596   }
597 
598   /// Return the context instruction, if any.
599   Instruction *getCtxI() const {
600     Value &V = getAnchorValue();
601     if (auto *I = dyn_cast<Instruction>(&V))
602       return I;
603     if (auto *Arg = dyn_cast<Argument>(&V))
604       if (!Arg->getParent()->isDeclaration())
605         return &Arg->getParent()->getEntryBlock().front();
606     if (auto *F = dyn_cast<Function>(&V))
607       if (!F->isDeclaration())
608         return &(F->getEntryBlock().front());
609     return nullptr;
610   }
611 
612   /// Return the value this abstract attribute is associated with.
613   Value &getAssociatedValue() const {
614     if (getCallSiteArgNo() < 0 || isa<Argument>(&getAnchorValue()))
615       return getAnchorValue();
616     assert(isa<CallBase>(&getAnchorValue()) && "Expected a call base!");
617     return *cast<CallBase>(&getAnchorValue())
618                 ->getArgOperand(getCallSiteArgNo());
619   }
620 
621   /// Return the type this abstract attribute is associated with.
622   Type *getAssociatedType() const {
623     if (getPositionKind() == IRPosition::IRP_RETURNED)
624       return getAssociatedFunction()->getReturnType();
625     return getAssociatedValue().getType();
626   }
627 
628   /// Return the callee argument number of the associated value if it is an
629   /// argument or call site argument, otherwise a negative value. In contrast to
630   /// `getCallSiteArgNo` this method will always return the "argument number"
  /// from the perspective of the callee. This may not be the same as the call
  /// site if this is a callback call.
633   int getCalleeArgNo() const {
634     return getArgNo(/* CallbackCalleeArgIfApplicable */ true);
635   }
636 
637   /// Return the call site argument number of the associated value if it is an
638   /// argument or call site argument, otherwise a negative value. In contrast to
  /// `getCalleeArgNo` this method will always return the "operand number" from
  /// the perspective of the call site. This may not be the same as the callee
  /// perspective if this is a callback call.
642   int getCallSiteArgNo() const {
643     return getArgNo(/* CallbackCalleeArgIfApplicable */ false);
644   }
645 
646   /// Return the index in the attribute list for this position.
647   unsigned getAttrIdx() const {
648     switch (getPositionKind()) {
649     case IRPosition::IRP_INVALID:
650     case IRPosition::IRP_FLOAT:
651       break;
652     case IRPosition::IRP_FUNCTION:
653     case IRPosition::IRP_CALL_SITE:
654       return AttributeList::FunctionIndex;
655     case IRPosition::IRP_RETURNED:
656     case IRPosition::IRP_CALL_SITE_RETURNED:
657       return AttributeList::ReturnIndex;
658     case IRPosition::IRP_ARGUMENT:
659     case IRPosition::IRP_CALL_SITE_ARGUMENT:
660       return getCallSiteArgNo() + AttributeList::FirstArgIndex;
661     }
662     llvm_unreachable(
663         "There is no attribute index for a floating or invalid position!");
664   }
665 
666   /// Return the associated position kind.
667   Kind getPositionKind() const {
668     char EncodingBits = getEncodingBits();
669     if (EncodingBits == ENC_CALL_SITE_ARGUMENT_USE)
670       return IRP_CALL_SITE_ARGUMENT;
671     if (EncodingBits == ENC_FLOATING_FUNCTION)
672       return IRP_FLOAT;
673 
674     Value *V = getAsValuePtr();
675     if (!V)
676       return IRP_INVALID;
677     if (isa<Argument>(V))
678       return IRP_ARGUMENT;
679     if (isa<Function>(V))
680       return isReturnPosition(EncodingBits) ? IRP_RETURNED : IRP_FUNCTION;
681     if (isa<CallBase>(V))
682       return isReturnPosition(EncodingBits) ? IRP_CALL_SITE_RETURNED
683                                             : IRP_CALL_SITE;
684     return IRP_FLOAT;
685   }
686 
687   /// TODO: Figure out if the attribute related helper functions should live
688   ///       here or somewhere else.
689 
  /// Return true if an attribute of any kind in \p AKs exists in the IR at a
  /// position that will affect this one. See also getAttrs(...).
692   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
693   ///                                 e.g., the function position if this is an
694   ///                                 argument position, should be ignored.
695   bool hasAttr(ArrayRef<Attribute::AttrKind> AKs,
696                bool IgnoreSubsumingPositions = false,
697                Attributor *A = nullptr) const;
698 
699   /// Return the attributes of any kind in \p AKs existing in the IR at a
700   /// position that will affect this one. While each position can only have a
701   /// single attribute of any kind in \p AKs, there are "subsuming" positions
702   /// that could have an attribute as well. This method returns all attributes
703   /// found in \p Attrs.
704   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
705   ///                                 e.g., the function position if this is an
706   ///                                 argument position, should be ignored.
707   void getAttrs(ArrayRef<Attribute::AttrKind> AKs,
708                 SmallVectorImpl<Attribute> &Attrs,
709                 bool IgnoreSubsumingPositions = false,
710                 Attributor *A = nullptr) const;
711 
  /// Remove the attributes of the kinds in \p AKs existing in the IR at this
  /// position.
713   void removeAttrs(ArrayRef<Attribute::AttrKind> AKs) const {
714     if (getPositionKind() == IRP_INVALID || getPositionKind() == IRP_FLOAT)
715       return;
716 
717     AttributeList AttrList;
718     auto *CB = dyn_cast<CallBase>(&getAnchorValue());
719     if (CB)
720       AttrList = CB->getAttributes();
721     else
722       AttrList = getAssociatedFunction()->getAttributes();
723 
724     LLVMContext &Ctx = getAnchorValue().getContext();
725     for (Attribute::AttrKind AK : AKs)
726       AttrList = AttrList.removeAttributeAtIndex(Ctx, getAttrIdx(), AK);
727 
728     if (CB)
729       CB->setAttributes(AttrList);
730     else
731       getAssociatedFunction()->setAttributes(AttrList);
732   }
733 
734   bool isAnyCallSitePosition() const {
735     switch (getPositionKind()) {
736     case IRPosition::IRP_CALL_SITE:
737     case IRPosition::IRP_CALL_SITE_RETURNED:
738     case IRPosition::IRP_CALL_SITE_ARGUMENT:
739       return true;
740     default:
741       return false;
742     }
743   }
744 
745   /// Return true if the position is an argument or call site argument.
746   bool isArgumentPosition() const {
747     switch (getPositionKind()) {
748     case IRPosition::IRP_ARGUMENT:
749     case IRPosition::IRP_CALL_SITE_ARGUMENT:
750       return true;
751     default:
752       return false;
753     }
754   }
755 
756   /// Return the same position without the call base context.
757   IRPosition stripCallBaseContext() const {
758     IRPosition Result = *this;
759     Result.CBContext = nullptr;
760     return Result;
761   }
762 
763   /// Get the call base context from the position.
764   const CallBaseContext *getCallBaseContext() const { return CBContext; }
765 
766   /// Check if the position has any call base context.
767   bool hasCallBaseContext() const { return CBContext != nullptr; }
768 
769   /// Special DenseMap key values.
770   ///
771   ///{
772   static const IRPosition EmptyKey;
773   static const IRPosition TombstoneKey;
774   ///}
775 
776   /// Conversion into a void * to allow reuse of pointer hashing.
777   operator void *() const { return Enc.getOpaqueValue(); }
778 
779 private:
780   /// Private constructor for special values only!
781   explicit IRPosition(void *Ptr, const CallBaseContext *CBContext = nullptr)
782       : CBContext(CBContext) {
783     Enc.setFromOpaqueValue(Ptr);
784   }
785 
  /// IRPosition anchored at \p AnchorVal with kind/argument number \p PK.
787   explicit IRPosition(Value &AnchorVal, Kind PK,
788                       const CallBaseContext *CBContext = nullptr)
789       : CBContext(CBContext) {
790     switch (PK) {
791     case IRPosition::IRP_INVALID:
792       llvm_unreachable("Cannot create invalid IRP with an anchor value!");
793       break;
794     case IRPosition::IRP_FLOAT:
795       // Special case for floating functions.
796       if (isa<Function>(AnchorVal) || isa<CallBase>(AnchorVal))
797         Enc = {&AnchorVal, ENC_FLOATING_FUNCTION};
798       else
799         Enc = {&AnchorVal, ENC_VALUE};
800       break;
801     case IRPosition::IRP_FUNCTION:
802     case IRPosition::IRP_CALL_SITE:
803       Enc = {&AnchorVal, ENC_VALUE};
804       break;
805     case IRPosition::IRP_RETURNED:
806     case IRPosition::IRP_CALL_SITE_RETURNED:
807       Enc = {&AnchorVal, ENC_RETURNED_VALUE};
808       break;
809     case IRPosition::IRP_ARGUMENT:
810       Enc = {&AnchorVal, ENC_VALUE};
811       break;
812     case IRPosition::IRP_CALL_SITE_ARGUMENT:
813       llvm_unreachable(
814           "Cannot create call site argument IRP with an anchor value!");
815       break;
816     }
817     verify();
818   }
819 
820   /// Return the callee argument number of the associated value if it is an
821   /// argument or call site argument. See also `getCalleeArgNo` and
822   /// `getCallSiteArgNo`.
823   int getArgNo(bool CallbackCalleeArgIfApplicable) const {
824     if (CallbackCalleeArgIfApplicable)
825       if (Argument *Arg = getAssociatedArgument())
826         return Arg->getArgNo();
827     switch (getPositionKind()) {
828     case IRPosition::IRP_ARGUMENT:
829       return cast<Argument>(getAsValuePtr())->getArgNo();
830     case IRPosition::IRP_CALL_SITE_ARGUMENT: {
831       Use &U = *getAsUsePtr();
832       return cast<CallBase>(U.getUser())->getArgOperandNo(&U);
833     }
834     default:
835       return -1;
836     }
837   }
838 
839   /// IRPosition for the use \p U. The position kind \p PK needs to be
  /// IRP_CALL_SITE_ARGUMENT, the anchor value is the user, and the associated
  /// value is the used value.
842   explicit IRPosition(Use &U, Kind PK) {
843     assert(PK == IRP_CALL_SITE_ARGUMENT &&
844            "Use constructor is for call site arguments only!");
845     Enc = {&U, ENC_CALL_SITE_ARGUMENT_USE};
846     verify();
847   }
848 
849   /// Verify internal invariants.
850   void verify();
851 
852   /// Return the attributes of kind \p AK existing in the IR as attribute.
853   bool getAttrsFromIRAttr(Attribute::AttrKind AK,
854                           SmallVectorImpl<Attribute> &Attrs) const;
855 
856   /// Return the attributes of kind \p AK existing in the IR as operand bundles
857   /// of an llvm.assume.
858   bool getAttrsFromAssumes(Attribute::AttrKind AK,
859                            SmallVectorImpl<Attribute> &Attrs,
860                            Attributor &A) const;
861 
862   /// Return the underlying pointer as Value *, valid for all positions but
863   /// IRP_CALL_SITE_ARGUMENT.
864   Value *getAsValuePtr() const {
865     assert(getEncodingBits() != ENC_CALL_SITE_ARGUMENT_USE &&
866            "Not a value pointer!");
867     return reinterpret_cast<Value *>(Enc.getPointer());
868   }
869 
870   /// Return the underlying pointer as Use *, valid only for
871   /// IRP_CALL_SITE_ARGUMENT positions.
872   Use *getAsUsePtr() const {
873     assert(getEncodingBits() == ENC_CALL_SITE_ARGUMENT_USE &&
874            "Not a value pointer!");
875     return reinterpret_cast<Use *>(Enc.getPointer());
876   }
877 
878   /// Return true if \p EncodingBits describe a returned or call site returned
879   /// position.
880   static bool isReturnPosition(char EncodingBits) {
881     return EncodingBits == ENC_RETURNED_VALUE;
882   }
883 
884   /// Return true if the encoding bits describe a returned or call site returned
885   /// position.
886   bool isReturnPosition() const { return isReturnPosition(getEncodingBits()); }
887 
888   /// The encoding of the IRPosition is a combination of a pointer and two
889   /// encoding bits. The values of the encoding bits are defined in the enum
890   /// below. The pointer is either a Value* (for the first three encoding bit
891   /// combinations) or Use* (for ENC_CALL_SITE_ARGUMENT_USE).
892   ///
893   ///{
894   enum {
895     ENC_VALUE = 0b00,
896     ENC_RETURNED_VALUE = 0b01,
897     ENC_FLOATING_FUNCTION = 0b10,
898     ENC_CALL_SITE_ARGUMENT_USE = 0b11,
899   };
900 
901   // Reserve the maximal amount of bits so there is no need to mask out the
902   // remaining ones. We will not encode anything else in the pointer anyway.
903   static constexpr int NumEncodingBits =
904       PointerLikeTypeTraits<void *>::NumLowBitsAvailable;
905   static_assert(NumEncodingBits >= 2, "At least two bits are required!");
906 
907   /// The pointer with the encoding bits.
908   PointerIntPair<void *, NumEncodingBits, char> Enc;
909   ///}
910 
911   /// Call base context. Used for callsite specific analysis.
912   const CallBaseContext *CBContext = nullptr;
913 
914   /// Return the encoding bits.
915   char getEncodingBits() const { return Enc.getInt(); }
916 };
917 
918 /// Helper that allows IRPosition as a key in a DenseMap.
919 template <> struct DenseMapInfo<IRPosition> {
920   static inline IRPosition getEmptyKey() { return IRPosition::EmptyKey; }
921   static inline IRPosition getTombstoneKey() {
922     return IRPosition::TombstoneKey;
923   }
924   static unsigned getHashValue(const IRPosition &IRP) {
925     return (DenseMapInfo<void *>::getHashValue(IRP) << 4) ^
926            (DenseMapInfo<Value *>::getHashValue(IRP.getCallBaseContext()));
927   }
928 
929   static bool isEqual(const IRPosition &a, const IRPosition &b) {
930     return a == b;
931   }
932 };
933 
934 /// A visitor class for IR positions.
935 ///
/// Given a position P, the SubsumingPositionIterator allows visiting "subsuming
937 /// positions" wrt. attributes/information. Thus, if a piece of information
938 /// holds for a subsuming position, it also holds for the position P.
939 ///
940 /// The subsuming positions always include the initial position and then,
941 /// depending on the position kind, additionally the following ones:
942 /// - for IRP_RETURNED:
943 ///   - the function (IRP_FUNCTION)
944 /// - for IRP_ARGUMENT:
945 ///   - the function (IRP_FUNCTION)
946 /// - for IRP_CALL_SITE:
947 ///   - the callee (IRP_FUNCTION), if known
948 /// - for IRP_CALL_SITE_RETURNED:
949 ///   - the callee (IRP_RETURNED), if known
950 ///   - the call site (IRP_FUNCTION)
951 ///   - the callee (IRP_FUNCTION), if known
952 /// - for IRP_CALL_SITE_ARGUMENT:
953 ///   - the argument of the callee (IRP_ARGUMENT), if known
954 ///   - the callee (IRP_FUNCTION), if known
955 ///   - the position the call site argument is associated with if it is not
956 ///     anchored to the call site, e.g., if it is an argument then the argument
957 ///     (IRP_ARGUMENT)
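///
/// A typical use is to look for information at all subsuming positions, e.g.
/// (a sketch):
/// \code
///   for (const IRPosition &SubIRP : SubsumingPositionIterator(IRP))
///     if (SubIRP.hasAttr({Attribute::NoFree}))
///       ; // An attribute at a subsuming position also holds for IRP.
/// \endcode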
958 class SubsumingPositionIterator {
959   SmallVector<IRPosition, 4> IRPositions;
960   using iterator = decltype(IRPositions)::iterator;
961 
962 public:
963   SubsumingPositionIterator(const IRPosition &IRP);
964   iterator begin() { return IRPositions.begin(); }
965   iterator end() { return IRPositions.end(); }
966 };
967 
/// Wrapper for FunctionAnalysisManager.
969 struct AnalysisGetter {
970   template <typename Analysis>
971   typename Analysis::Result *getAnalysis(const Function &F) {
972     if (!FAM || !F.getParent())
973       return nullptr;
974     return &FAM->getResult<Analysis>(const_cast<Function &>(F));
975   }
976 
977   AnalysisGetter(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
978   AnalysisGetter() = default;
979 
980 private:
981   FunctionAnalysisManager *FAM = nullptr;
982 };
983 
984 /// Data structure to hold cached (LLVM-IR) information.
985 ///
986 /// All attributes are given an InformationCache object at creation time to
987 /// avoid inspection of the IR by all of them individually. This default
988 /// InformationCache will hold information required by 'default' attributes,
989 /// thus the ones deduced when Attributor::identifyDefaultAbstractAttributes(..)
990 /// is called.
991 ///
992 /// If custom abstract attributes, registered manually through
993 /// Attributor::registerAA(...), need more information, especially if it is not
994 /// reusable, it is advised to inherit from the InformationCache and cast the
995 /// instance down in the abstract attributes.
996 struct InformationCache {
997   InformationCache(const Module &M, AnalysisGetter &AG,
998                    BumpPtrAllocator &Allocator, SetVector<Function *> *CGSCC)
999       : DL(M.getDataLayout()), Allocator(Allocator),
1000         Explorer(
1001             /* ExploreInterBlock */ true, /* ExploreCFGForward */ true,
1002             /* ExploreCFGBackward */ true,
1003             /* LIGetter */
1004             [&](const Function &F) { return AG.getAnalysis<LoopAnalysis>(F); },
1005             /* DTGetter */
1006             [&](const Function &F) {
1007               return AG.getAnalysis<DominatorTreeAnalysis>(F);
1008             },
1009             /* PDTGetter */
1010             [&](const Function &F) {
1011               return AG.getAnalysis<PostDominatorTreeAnalysis>(F);
1012             }),
1013         AG(AG), TargetTriple(M.getTargetTriple()) {
1014     if (CGSCC)
1015       initializeModuleSlice(*CGSCC);
1016   }
1017 
1018   ~InformationCache() {
    // The FunctionInfo objects are allocated via a BumpPtrAllocator, so we
    // call the destructors manually.
1021     for (auto &It : FuncInfoMap)
1022       It.getSecond()->~FunctionInfo();
1023   }
1024 
1025   /// Apply \p CB to all uses of \p F. If \p LookThroughConstantExprUses is
1026   /// true, constant expression users are not given to \p CB but their uses are
1027   /// traversed transitively.
1028   template <typename CBTy>
1029   static void foreachUse(Function &F, CBTy CB,
1030                          bool LookThroughConstantExprUses = true) {
1031     SmallVector<Use *, 8> Worklist(make_pointer_range(F.uses()));
1032 
1033     for (unsigned Idx = 0; Idx < Worklist.size(); ++Idx) {
1034       Use &U = *Worklist[Idx];
1035 
1036       // Allow use in constant bitcasts and simply look through them.
1037       if (LookThroughConstantExprUses && isa<ConstantExpr>(U.getUser())) {
1038         for (Use &CEU : cast<ConstantExpr>(U.getUser())->uses())
1039           Worklist.push_back(&CEU);
1040         continue;
1041       }
1042 
1043       CB(U);
1044     }
1045   }
1046 
  /// Initialize the ModuleSlice member based on \p SCC. ModuleSlice contains
  /// (a subset of) all functions that we can look at during this SCC traversal.
  /// This includes functions (transitively) called from the SCC and the
  /// (transitive) callers of SCC functions. We can also look at a function if
  /// there is a "reference edge", i.e., if the function somehow uses (!= calls)
1052   /// a function in the SCC or a caller of a function in the SCC.
1053   void initializeModuleSlice(SetVector<Function *> &SCC) {
1054     ModuleSlice.insert(SCC.begin(), SCC.end());
1055 
1056     SmallPtrSet<Function *, 16> Seen;
1057     SmallVector<Function *, 16> Worklist(SCC.begin(), SCC.end());
1058     while (!Worklist.empty()) {
1059       Function *F = Worklist.pop_back_val();
1060       ModuleSlice.insert(F);
1061 
1062       for (Instruction &I : instructions(*F))
1063         if (auto *CB = dyn_cast<CallBase>(&I))
1064           if (Function *Callee = CB->getCalledFunction())
1065             if (Seen.insert(Callee).second)
1066               Worklist.push_back(Callee);
1067     }
1068 
1069     Seen.clear();
1070     Worklist.append(SCC.begin(), SCC.end());
1071     while (!Worklist.empty()) {
1072       Function *F = Worklist.pop_back_val();
1073       ModuleSlice.insert(F);
1074 
1075       // Traverse all transitive uses.
1076       foreachUse(*F, [&](Use &U) {
1077         if (auto *UsrI = dyn_cast<Instruction>(U.getUser()))
1078           if (Seen.insert(UsrI->getFunction()).second)
1079             Worklist.push_back(UsrI->getFunction());
1080       });
1081     }
1082   }
1083 
1084   /// The slice of the module we are allowed to look at.
1085   SmallPtrSet<Function *, 8> ModuleSlice;
1086 
1087   /// A vector type to hold instructions.
1088   using InstructionVectorTy = SmallVector<Instruction *, 8>;
1089 
1090   /// A map type from opcodes to instructions with this opcode.
1091   using OpcodeInstMapTy = DenseMap<unsigned, InstructionVectorTy *>;
1092 
1093   /// Return the map that relates "interesting" opcodes with all instructions
1094   /// with that opcode in \p F.
1095   OpcodeInstMapTy &getOpcodeInstMapForFunction(const Function &F) {
1096     return getFunctionInfo(F).OpcodeInstMap;
1097   }
1098 
1099   /// Return the instructions in \p F that may read or write memory.
1100   InstructionVectorTy &getReadOrWriteInstsForFunction(const Function &F) {
1101     return getFunctionInfo(F).RWInsts;
1102   }
1103 
1104   /// Return MustBeExecutedContextExplorer
1105   MustBeExecutedContextExplorer &getMustBeExecutedContextExplorer() {
1106     return Explorer;
1107   }
1108 
1109   /// Return TargetLibraryInfo for function \p F.
1110   TargetLibraryInfo *getTargetLibraryInfoForFunction(const Function &F) {
1111     return AG.getAnalysis<TargetLibraryAnalysis>(F);
1112   }
1113 
1114   /// Return AliasAnalysis Result for function \p F.
1115   AAResults *getAAResultsForFunction(const Function &F);
1116 
  /// Return true if \p Arg is involved in a must-tail call, that is, it is an
  /// argument of the caller or the callee of a must-tail call.
1119   bool isInvolvedInMustTailCall(const Argument &Arg) {
1120     FunctionInfo &FI = getFunctionInfo(*Arg.getParent());
1121     return FI.CalledViaMustTail || FI.ContainsMustTailCall;
1122   }
1123 
1124   bool isOnlyUsedByAssume(const Instruction &I) const {
1125     return AssumeOnlyValues.contains(&I);
1126   }
1127 
1128   /// Return the analysis result from a pass \p AP for function \p F.
1129   template <typename AP>
1130   typename AP::Result *getAnalysisResultForFunction(const Function &F) {
1131     return AG.getAnalysis<AP>(F);
1132   }
1133 
1134   /// Return datalayout used in the module.
1135   const DataLayout &getDL() { return DL; }
1136 
  /// Return the map containing all the knowledge we have from `llvm.assume`s.
1138   const RetainedKnowledgeMap &getKnowledgeMap() const { return KnowledgeMap; }
1139 
  /// Return true if \p To is potentially reachable from \p From, false
  /// otherwise. If the same query was answered before, the cached result is
  /// returned.
1142   bool getPotentiallyReachable(const Instruction &From, const Instruction &To) {
1143     auto KeyPair = std::make_pair(&From, &To);
1144     auto Iter = PotentiallyReachableMap.find(KeyPair);
1145     if (Iter != PotentiallyReachableMap.end())
1146       return Iter->second;
1147     const Function &F = *From.getFunction();
1148     bool Result = true;
1149     if (From.getFunction() == To.getFunction())
1150       Result = isPotentiallyReachable(&From, &To, nullptr,
1151                                       AG.getAnalysis<DominatorTreeAnalysis>(F),
1152                                       AG.getAnalysis<LoopAnalysis>(F));
1153     PotentiallyReachableMap.insert(std::make_pair(KeyPair, Result));
1154     return Result;
1155   }
1156 
1157   /// Check whether \p F is part of module slice.
1158   bool isInModuleSlice(const Function &F) {
1159     return ModuleSlice.count(const_cast<Function *>(&F));
1160   }
1161 
1162   /// Return true if the stack (llvm::Alloca) can be accessed by other threads.
1163   bool stackIsAccessibleByOtherThreads() { return !targetIsGPU(); }
1164 
1165   /// Return true if the target is a GPU.
1166   bool targetIsGPU() {
1167     return TargetTriple.isAMDGPU() || TargetTriple.isNVPTX();
1168   }
1169 
1170 private:
1171   struct FunctionInfo {
1172     ~FunctionInfo();
1173 
1174     /// A nested map that remembers all instructions in a function with a
1175     /// certain instruction opcode (Instruction::getOpcode()).
1176     OpcodeInstMapTy OpcodeInstMap;
1177 
    /// The instructions of the function that may read or write memory.
1180     InstructionVectorTy RWInsts;
1181 
1182     /// Function is called by a `musttail` call.
1183     bool CalledViaMustTail;
1184 
1185     /// Function contains a `musttail` call.
1186     bool ContainsMustTailCall;
1187   };
1188 
  /// A map type from functions to information about them.
1190   DenseMap<const Function *, FunctionInfo *> FuncInfoMap;
1191 
1192   /// Return information about the function \p F, potentially by creating it.
1193   FunctionInfo &getFunctionInfo(const Function &F) {
1194     FunctionInfo *&FI = FuncInfoMap[&F];
1195     if (!FI) {
1196       FI = new (Allocator) FunctionInfo();
1197       initializeInformationCache(F, *FI);
1198     }
1199     return *FI;
1200   }
1201 
1202   /// Initialize the function information cache \p FI for the function \p F.
1203   ///
  /// This method needs to be called for all functions that might be looked at
1205   /// through the information cache interface *prior* to looking at them.
1206   void initializeInformationCache(const Function &F, FunctionInfo &FI);
1207 
1208   /// The datalayout used in the module.
1209   const DataLayout &DL;
1210 
1211   /// The allocator used to allocate memory, e.g. for `FunctionInfo`s.
1212   BumpPtrAllocator &Allocator;
1213 
1214   /// MustBeExecutedContextExplorer
1215   MustBeExecutedContextExplorer Explorer;
1216 
1217   /// A map with knowledge retained in `llvm.assume` instructions.
1218   RetainedKnowledgeMap KnowledgeMap;
1219 
1220   /// A container for all instructions that are only used by `llvm.assume`.
1221   SetVector<const Instruction *> AssumeOnlyValues;
1222 
1223   /// Getters for analysis.
1224   AnalysisGetter &AG;
1225 
1226   /// Set of inlineable functions
1227   SmallPtrSet<const Function *, 8> InlineableFunctions;
1228 
1229   /// A map for caching results of queries for isPotentiallyReachable
1230   DenseMap<std::pair<const Instruction *, const Instruction *>, bool>
1231       PotentiallyReachableMap;
1232 
1233   /// The triple describing the target machine.
1234   Triple TargetTriple;
1235 
1236   /// Give the Attributor access to the members so
1237   /// Attributor::identifyDefaultAbstractAttributes(...) can initialize them.
1238   friend struct Attributor;
1239 };
1240 
1241 /// Configuration for the Attributor.
1242 struct AttributorConfig {
1243 
1244   AttributorConfig(CallGraphUpdater &CGUpdater) : CGUpdater(CGUpdater) {}
1245 
1246   /// Is the user of the Attributor a module pass or not. This determines what
1247   /// IR we can look at and modify. If it is a module pass we might deduce facts
1248   /// outside the initial function set and modify functions outside that set,
1249   /// but only as part of the optimization of the functions in the initial
1250   /// function set. For CGSCC passes we can look at the IR of the module slice
1251   /// but never run any deduction, or perform any modification, outside the
1252   /// initial function set (which we assume is the SCC).
1253   bool IsModulePass = true;
1254 
1255   /// Flag to determine if we can delete functions or keep dead ones around.
1256   bool DeleteFns = true;
1257 
1258   /// Flag to determine if we rewrite function signatures.
1259   bool RewriteSignatures = true;
1260 
1261   /// Flag to determine if we want to initialize all default AAs for an internal
1262   /// function marked live.
1263   /// TODO: This should probably be a callback, or maybe
1264   /// identifyDefaultAbstractAttributes should be virtual, something to allow
1265   /// customizable lazy initialization for internal functions.
1266   bool DefaultInitializeLiveInternals = true;
1267 
1268   /// Helper to update an underlying call graph and to delete functions.
1269   CallGraphUpdater &CGUpdater;
1270 
1271   /// If not null, a set limiting the attribute opportunities.
1272   DenseSet<const char *> *Allowed = nullptr;
1273 
1274   /// Maximum number of iterations to run until fixpoint.
1275   Optional<unsigned> MaxFixpointIterations = None;
1276 
1277   /// A callback function that returns an ORE object from a Function pointer.
1278   ///{
1279   using OptimizationRemarkGetter =
1280       function_ref<OptimizationRemarkEmitter &(Function *)>;
1281   OptimizationRemarkGetter OREGetter = nullptr;
1282   ///}
1283 
1284   /// The name of the pass running the attributor, used to emit remarks.
1285   const char *PassName = nullptr;
1286 };
1287 
1288 /// The fixpoint analysis framework that orchestrates the attribute deduction.
1289 ///
1290 /// The Attributor provides a general abstract analysis framework (guided
1291 /// fixpoint iteration) as well as helper functions for the deduction of
/// (LLVM-IR) attributes. However, other code properties can also be deduced,
1293 /// propagated, and ultimately manifested through the Attributor framework. This
1294 /// is particularly useful if these properties interact with attributes and a
/// co-scheduled deduction allows the solution to be improved. Even if not, that
/// is, if attributes/properties are completely isolated, they should use the
1297 /// Attributor framework to reduce the number of fixpoint iteration frameworks
1298 /// in the code base. Note that the Attributor design makes sure that isolated
1299 /// attributes are not impacted, in any way, by others derived at the same time
1300 /// if there is no cross-reasoning performed.
1301 ///
1302 /// The public facing interface of the Attributor is kept simple and basically
/// allows abstract attributes to do one thing: query abstract attributes
1304 /// in-flight. There are two reasons to do this:
1305 ///    a) The optimistic state of one abstract attribute can justify an
///       optimistic state of another, allowing the framework to end up with an
1307 ///       optimistic (=best possible) fixpoint instead of one based solely on
1308 ///       information in the IR.
1309 ///    b) This avoids reimplementing various kinds of lookups, e.g., to check
///       for existing IR attributes, in favor of a single lookup interface
1311 ///       provided by an abstract attribute subclass.
1312 ///
1313 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
1314 ///       described in the file comment.
1315 struct Attributor {
1316 
1317   /// Constructor
1318   ///
1319   /// \param Functions The set of functions we are deriving attributes for.
1320   /// \param InfoCache Cache to hold various information accessible for
1321   ///                  the abstract attributes.
1322   /// \param Configuration The Attributor configuration which determines what
1323   ///                      generic features to use.
1324   Attributor(SetVector<Function *> &Functions, InformationCache &InfoCache,
1325              AttributorConfig Configuration)
1326       : Allocator(InfoCache.Allocator), Functions(Functions),
1327         InfoCache(InfoCache), Configuration(Configuration) {}
1328 
1329   ~Attributor();
1330 
1331   /// Run the analyses until a fixpoint is reached or enforced (timeout).
1332   ///
  /// The attributes registered with this Attributor can be used afterwards as
  /// long as the Attributor is not destroyed (it owns the attributes now).
1335   ///
  /// \returns CHANGED if the IR was changed, otherwise UNCHANGED.
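  ///
  /// A rough usage sketch for a driver (the helpers `Functions`, `InfoCache`,
  /// and `CGUpdater` are assumed to be set up by the caller; the seeding loop
  /// is illustrative, not mandatory):
  /// \code
  ///   AttributorConfig AC(CGUpdater);
  ///   AC.IsModulePass = true;
  ///   Attributor A(Functions, InfoCache, AC);
  ///   for (Function *F : Functions)
  ///     A.identifyDefaultAbstractAttributes(*F);
  ///   ChangeStatus Changed = A.run();
  /// \endcode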
1337   ChangeStatus run();
1338 
1339   /// Lookup an abstract attribute of type \p AAType at position \p IRP. While
  /// no abstract attribute is found, equivalent positions are checked, see
1341   /// SubsumingPositionIterator. Thus, the returned abstract attribute
1342   /// might be anchored at a different position, e.g., the callee if \p IRP is a
1343   /// call base.
1344   ///
1345   /// This method is the only (supported) way an abstract attribute can retrieve
1346   /// information from another abstract attribute. As an example, take an
  /// abstract attribute that determines the memory access behavior for an
1348   /// argument (readnone, readonly, ...). It should use `getAAFor` to get the
  /// most optimistic information from other abstract attributes in-flight, e.g.
1350   /// the one reasoning about the "captured" state for the argument or the one
1351   /// reasoning on the memory access behavior of the function as a whole.
1352   ///
1353   /// If the DepClass enum is set to `DepClassTy::None` the dependence from
  /// \p QueryingAA to the returned abstract attribute is not automatically
  /// recorded. This should only be used if the caller will record the
  /// dependence explicitly if necessary, that is, if the returned abstract
  /// attribute is used for reasoning. To record the dependences explicitly use
1358   /// the `Attributor::recordDependence` method.
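  ///
  /// A minimal usage sketch from within an `updateImpl` (the Attributor is
  /// assumed to be available as `A`; the use of AANoCapture and
  /// `isAssumedNoCapture` is only illustrative):
  /// \code
  ///   const auto &NoCaptureAA = A.getAAFor<AANoCapture>(
  ///       *this, IRPosition::argument(Arg), DepClassTy::REQUIRED);
  ///   if (NoCaptureAA.isAssumedNoCapture())
  ///     ...; // Build on the optimistic "not captured" assumption.
  /// \endcode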
1359   template <typename AAType>
1360   const AAType &getAAFor(const AbstractAttribute &QueryingAA,
1361                          const IRPosition &IRP, DepClassTy DepClass) {
1362     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1363                                     /* ForceUpdate */ false);
1364   }
1365 
  /// Similar to getAAFor but the returned abstract attribute will be updated (via
1367   /// `AbstractAttribute::update`) even if it is found in the cache. This is
1368   /// especially useful for AAIsDead as changes in liveness can make updates
1369   /// possible/useful that were not happening before as the abstract attribute
1370   /// was assumed dead.
1371   template <typename AAType>
1372   const AAType &getAndUpdateAAFor(const AbstractAttribute &QueryingAA,
1373                                   const IRPosition &IRP, DepClassTy DepClass) {
1374     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1375                                     /* ForceUpdate */ true);
1376   }
1377 
  /// The version of getAAFor that allows omitting a querying abstract
1379   /// attribute. Using this after Attributor started running is restricted to
1380   /// only the Attributor itself. Initial seeding of AAs can be done via this
1381   /// function.
1382   /// NOTE: ForceUpdate is ignored in any stage other than the update stage.
1383   template <typename AAType>
1384   const AAType &getOrCreateAAFor(IRPosition IRP,
1385                                  const AbstractAttribute *QueryingAA,
1386                                  DepClassTy DepClass, bool ForceUpdate = false,
1387                                  bool UpdateAfterInit = true) {
1388     if (!shouldPropagateCallBaseContext(IRP))
1389       IRP = IRP.stripCallBaseContext();
1390 
1391     if (AAType *AAPtr = lookupAAFor<AAType>(IRP, QueryingAA, DepClass,
1392                                             /* AllowInvalidState */ true)) {
1393       if (ForceUpdate && Phase == AttributorPhase::UPDATE)
1394         updateAA(*AAPtr);
1395       return *AAPtr;
1396     }
1397 
1398     // No matching attribute found, create one.
1399     // Use the static create method.
1400     auto &AA = AAType::createForPosition(IRP, *this);
1401 
    // If we are currently seeding attributes, enforce seeding rules.
1403     if (Phase == AttributorPhase::SEEDING && !shouldSeedAttribute(AA)) {
1404       AA.getState().indicatePessimisticFixpoint();
1405       return AA;
1406     }
1407 
1408     registerAA(AA);
1409 
1410     // For now we ignore naked and optnone functions.
1411     bool Invalidate =
1412         Configuration.Allowed && !Configuration.Allowed->count(&AAType::ID);
1413     const Function *AnchorFn = IRP.getAnchorScope();
1414     if (AnchorFn) {
1415       Invalidate |=
1416           AnchorFn->hasFnAttribute(Attribute::Naked) ||
1417           AnchorFn->hasFnAttribute(Attribute::OptimizeNone) ||
1418           (!isModulePass() && !getInfoCache().isInModuleSlice(*AnchorFn));
1419     }
1420 
1421     // Avoid too many nested initializations to prevent a stack overflow.
1422     Invalidate |= InitializationChainLength > MaxInitializationChainLength;
1423 
1424     // Bootstrap the new attribute with an initial update to propagate
    // information, e.g., function -> call site. If it is not in the Allowed
    // set we will not perform updates at all.
1427     if (Invalidate) {
1428       AA.getState().indicatePessimisticFixpoint();
1429       return AA;
1430     }
1431 
1432     {
1433       TimeTraceScope TimeScope(AA.getName() + "::initialize");
1434       ++InitializationChainLength;
1435       AA.initialize(*this);
1436       --InitializationChainLength;
1437     }
1438 
1439     // We update only AAs associated with functions in the Functions set or
1440     // call sites of them.
1441     if ((AnchorFn && !Functions.count(const_cast<Function *>(AnchorFn))) &&
1442         !Functions.count(IRP.getAssociatedFunction())) {
1443       AA.getState().indicatePessimisticFixpoint();
1444       return AA;
1445     }
1446 
1447     // If this is queried in the manifest stage, we force the AA to indicate
1448     // pessimistic fixpoint immediately.
1449     if (Phase == AttributorPhase::MANIFEST) {
1450       AA.getState().indicatePessimisticFixpoint();
1451       return AA;
1452     }
1453 
1454     // Allow seeded attributes to declare dependencies.
1455     // Remember the seeding state.
1456     if (UpdateAfterInit) {
1457       AttributorPhase OldPhase = Phase;
1458       Phase = AttributorPhase::UPDATE;
1459 
1460       updateAA(AA);
1461 
1462       Phase = OldPhase;
1463     }
1464 
1465     if (QueryingAA && AA.getState().isValidState())
1466       recordDependence(AA, const_cast<AbstractAttribute &>(*QueryingAA),
1467                        DepClass);
1468     return AA;
1469   }
1470   template <typename AAType>
1471   const AAType &getOrCreateAAFor(const IRPosition &IRP) {
1472     return getOrCreateAAFor<AAType>(IRP, /* QueryingAA */ nullptr,
1473                                     DepClassTy::NONE);
1474   }
1475 
1476   /// Return the attribute of \p AAType for \p IRP if existing and valid. This
1477   /// also allows non-AA users lookup.
1478   template <typename AAType>
1479   AAType *lookupAAFor(const IRPosition &IRP,
1480                       const AbstractAttribute *QueryingAA = nullptr,
1481                       DepClassTy DepClass = DepClassTy::OPTIONAL,
1482                       bool AllowInvalidState = false) {
1483     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1484                   "Cannot query an attribute with a type not derived from "
1485                   "'AbstractAttribute'!");
1486     // Lookup the abstract attribute of type AAType. If found, return it after
1487     // registering a dependence of QueryingAA on the one returned attribute.
1488     AbstractAttribute *AAPtr = AAMap.lookup({&AAType::ID, IRP});
1489     if (!AAPtr)
1490       return nullptr;
1491 
1492     AAType *AA = static_cast<AAType *>(AAPtr);
1493 
1494     // Do not register a dependence on an attribute with an invalid state.
1495     if (DepClass != DepClassTy::NONE && QueryingAA &&
1496         AA->getState().isValidState())
1497       recordDependence(*AA, const_cast<AbstractAttribute &>(*QueryingAA),
1498                        DepClass);
1499 
1500     // Return nullptr if this attribute has an invalid state.
1501     if (!AllowInvalidState && !AA->getState().isValidState())
1502       return nullptr;
1503     return AA;
1504   }
1505 
1506   /// Allows a query AA to request an update if a new query was received.
1507   void registerForUpdate(AbstractAttribute &AA);
1508 
1509   /// Explicitly record a dependence from \p FromAA to \p ToAA, that is if
1510   /// \p FromAA changes \p ToAA should be updated as well.
1511   ///
1512   /// This method should be used in conjunction with the `getAAFor` method and
1513   /// with the DepClass enum passed to the method set to None. This can
1514   /// be beneficial to avoid false dependences but it requires the users of
1515   /// `getAAFor` to explicitly record true dependences through this method.
  /// The \p DepClass flag indicates if the dependence is strictly necessary.
  /// That means for required dependences, if \p FromAA changes to an invalid
  /// state, \p ToAA can be moved to a pessimistic fixpoint because it required
  /// information from \p FromAA but none is available anymore.
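  ///
  /// A sketch of the intended pattern (`AAType`, `IRP`, and the condition are
  /// placeholders):
  /// \code
  ///   const auto &OtherAA = A.getAAFor<AAType>(*this, IRP, DepClassTy::NONE);
  ///   if (/* the optimistic state of OtherAA was actually used */)
  ///     A.recordDependence(OtherAA, *this, DepClassTy::REQUIRED);
  /// \endcode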
1520   void recordDependence(const AbstractAttribute &FromAA,
1521                         const AbstractAttribute &ToAA, DepClassTy DepClass);
1522 
1523   /// Introduce a new abstract attribute into the fixpoint analysis.
1524   ///
  /// Note that ownership of the attribute is given to the Attributor. The
  /// Attributor will invoke delete on the attribute when it is destroyed.
1527   ///
1528   /// Attributes are identified by their IR position (AAType::getIRPosition())
1529   /// and the address of their static member (see AAType::ID).
1530   template <typename AAType> AAType &registerAA(AAType &AA) {
1531     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1532                   "Cannot register an attribute with a type not derived from "
1533                   "'AbstractAttribute'!");
1534     // Put the attribute in the lookup map structure and the container we use to
1535     // keep track of all attributes.
1536     const IRPosition &IRP = AA.getIRPosition();
1537     AbstractAttribute *&AAPtr = AAMap[{&AAType::ID, IRP}];
1538 
1539     assert(!AAPtr && "Attribute already in map!");
1540     AAPtr = &AA;
1541 
1542     // Register AA with the synthetic root only before the manifest stage.
1543     if (Phase == AttributorPhase::SEEDING || Phase == AttributorPhase::UPDATE)
1544       DG.SyntheticRoot.Deps.push_back(
1545           AADepGraphNode::DepTy(&AA, unsigned(DepClassTy::REQUIRED)));
1546 
1547     return AA;
1548   }
1549 
1550   /// Return the internal information cache.
1551   InformationCache &getInfoCache() { return InfoCache; }
1552 
1553   /// Return true if this is a module pass, false otherwise.
1554   bool isModulePass() const { return Configuration.IsModulePass; }
1555 
1556   /// Return true if we derive attributes for \p Fn
1557   bool isRunOn(Function &Fn) const {
1558     return Functions.empty() || Functions.count(&Fn);
1559   }
1560 
1561   /// Determine opportunities to derive 'default' attributes in \p F and create
1562   /// abstract attribute objects for them.
1563   ///
1564   /// \param F The function that is checked for attribute opportunities.
1565   ///
1566   /// Note that abstract attribute instances are generally created even if the
1567   /// IR already contains the information they would deduce. The most important
1568   /// reason for this is the single interface, the one of the abstract attribute
1569   /// instance, which can be queried without the need to look at the IR in
1570   /// various places.
1571   void identifyDefaultAbstractAttributes(Function &F);
1572 
1573   /// Determine whether the function \p F is IPO amendable
1574   ///
  /// If a function is exactly defined or it has the alwaysinline attribute
  /// and is viable to be inlined, we say it is IPO amendable.
1577   bool isFunctionIPOAmendable(const Function &F) {
1578     return F.hasExactDefinition() || InfoCache.InlineableFunctions.count(&F);
1579   }
1580 
1581   /// Mark the internal function \p F as live.
1582   ///
1583   /// This will trigger the identification and initialization of attributes for
1584   /// \p F.
1585   void markLiveInternalFunction(const Function &F) {
1586     assert(F.hasLocalLinkage() &&
1587            "Only local linkage is assumed dead initially.");
1588 
1589     if (Configuration.DefaultInitializeLiveInternals)
1590       identifyDefaultAbstractAttributes(const_cast<Function &>(F));
1591   }
1592 
1593   /// Helper function to remove callsite.
1594   void removeCallSite(CallInst *CI) {
1595     if (!CI)
1596       return;
1597 
1598     Configuration.CGUpdater.removeCallSite(*CI);
1599   }
1600 
  /// Record that \p U is to be replaced with \p NV after information was
  /// manifested. This also triggers deletion of trivially dead instructions.
1603   bool changeUseAfterManifest(Use &U, Value &NV) {
1604     Value *&V = ToBeChangedUses[&U];
1605     if (V && (V->stripPointerCasts() == NV.stripPointerCasts() ||
1606               isa_and_nonnull<UndefValue>(V)))
1607       return false;
1608     assert((!V || V == &NV || isa<UndefValue>(NV)) &&
1609            "Use was registered twice for replacement with different values!");
1610     V = &NV;
1611     return true;
1612   }
1613 
1614   /// Helper function to replace all uses associated with \p IRP with \p NV.
1615   /// Return true if there is any change. The flag \p ChangeDroppable indicates
  /// if droppable uses should be changed too.
1617   bool changeAfterManifest(const IRPosition IRP, Value &NV,
1618                            bool ChangeDroppable = true) {
1619     if (IRP.getPositionKind() == IRPosition::IRP_CALL_SITE_ARGUMENT) {
1620       auto *CB = cast<CallBase>(IRP.getCtxI());
1621       return changeUseAfterManifest(
1622           CB->getArgOperandUse(IRP.getCallSiteArgNo()), NV);
1623     }
1624     Value &V = IRP.getAssociatedValue();
1625     auto &Entry = ToBeChangedValues[&V];
1626     Value *&CurNV = Entry.first;
1627     if (CurNV && (CurNV->stripPointerCasts() == NV.stripPointerCasts() ||
1628                   isa<UndefValue>(CurNV)))
1629       return false;
1630     assert((!CurNV || CurNV == &NV || isa<UndefValue>(NV)) &&
1631            "Value replacement was registered twice with different values!");
1632     CurNV = &NV;
1633     Entry.second = ChangeDroppable;
1634     return true;
1635   }
1636 
1637   /// Record that \p I is to be replaced with `unreachable` after information
1638   /// was manifested.
1639   void changeToUnreachableAfterManifest(Instruction *I) {
1640     ToBeChangedToUnreachableInsts.insert(I);
1641   }
1642 
1643   /// Record that \p II has at least one dead successor block. This information
1644   /// is used, e.g., to replace \p II with a call, after information was
1645   /// manifested.
1646   void registerInvokeWithDeadSuccessor(InvokeInst &II) {
1647     InvokeWithDeadSuccessor.insert(&II);
1648   }
1649 
1650   /// Record that \p I is deleted after information was manifested. This also
  /// triggers deletion of trivially dead instructions.
1652   void deleteAfterManifest(Instruction &I) { ToBeDeletedInsts.insert(&I); }
1653 
1654   /// Record that \p BB is deleted after information was manifested. This also
  /// triggers deletion of trivially dead instructions.
1656   void deleteAfterManifest(BasicBlock &BB) { ToBeDeletedBlocks.insert(&BB); }
1657 
  /// Record that \p BB is added during the manifest of an AA. Added basic
  /// blocks are preserved in the IR.
1660   void registerManifestAddedBasicBlock(BasicBlock &BB) {
1661     ManifestAddedBlocks.insert(&BB);
1662   }
1663 
1664   /// Record that \p F is deleted after information was manifested.
1665   void deleteAfterManifest(Function &F) {
1666     if (Configuration.DeleteFns)
1667       ToBeDeletedFunctions.insert(&F);
1668   }
1669 
  /// If \p IRP is assumed to be a constant, return it; if it is still unclear,
  /// return None; otherwise return `nullptr`.
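  ///
  /// The three possible outcomes are typically handled as follows (a sketch,
  /// assuming the Attributor is available as `A`):
  /// \code
  ///   bool UsedAssumedInformation = false;
  ///   Optional<Constant *> C = A.getAssumedConstant(V, *this,
  ///                                                 UsedAssumedInformation);
  ///   if (!C)
  ///     ...; // Not clear yet, stay optimistic for now.
  ///   else if (!*C)
  ///     ...; // Known to not be a constant.
  ///   else
  ///     ...; // Use the constant *C.
  /// \endcode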
1672   Optional<Constant *> getAssumedConstant(const IRPosition &IRP,
1673                                           const AbstractAttribute &AA,
1674                                           bool &UsedAssumedInformation);
1675   Optional<Constant *> getAssumedConstant(const Value &V,
1676                                           const AbstractAttribute &AA,
1677                                           bool &UsedAssumedInformation) {
1678     return getAssumedConstant(IRPosition::value(V), AA, UsedAssumedInformation);
1679   }
1680 
  /// If \p V is assumed simplified, return it; if it is still unclear,
  /// return None; otherwise return `nullptr`.
1683   Optional<Value *> getAssumedSimplified(const IRPosition &IRP,
1684                                          const AbstractAttribute &AA,
1685                                          bool &UsedAssumedInformation,
1686                                          AA::ValueScope S) {
1687     return getAssumedSimplified(IRP, &AA, UsedAssumedInformation, S);
1688   }
1689   Optional<Value *> getAssumedSimplified(const Value &V,
1690                                          const AbstractAttribute &AA,
1691                                          bool &UsedAssumedInformation,
1692                                          AA::ValueScope S) {
1693     return getAssumedSimplified(IRPosition::value(V), AA,
1694                                 UsedAssumedInformation, S);
1695   }
1696 
  /// If \p V is assumed simplified, return it; if it is still unclear,
  /// return None; otherwise return `nullptr`. Same as the public version
1699   /// except that it can be used without recording dependences on any \p AA.
1700   Optional<Value *> getAssumedSimplified(const IRPosition &V,
1701                                          const AbstractAttribute *AA,
1702                                          bool &UsedAssumedInformation,
1703                                          AA::ValueScope S);
1704 
  /// Try to simplify \p IRP in the scope \p S. If successful, true is
1706   /// returned and all potential values \p IRP can take are put into \p Values.
1707   /// If false is returned no other information is valid.
1708   bool getAssumedSimplifiedValues(const IRPosition &IRP,
1709                                   const AbstractAttribute *AA,
1710                                   SmallVectorImpl<AA::ValueAndContext> &Values,
1711                                   AA::ValueScope S,
1712                                   bool &UsedAssumedInformation);
1713 
1714   /// Register \p CB as a simplification callback.
1715   /// `Attributor::getAssumedSimplified` will use these callbacks before
  /// it will ask `AAValueSimplify`. It is important to ensure this
1717   /// is called before `identifyDefaultAbstractAttributes`, assuming the
1718   /// latter is called at all.
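  ///
  /// A minimal registration sketch (the returned replacement value and the
  /// captured names are purely illustrative):
  /// \code
  ///   A.registerSimplificationCallback(
  ///       IRPosition::value(V),
  ///       [&](const IRPosition &IRP, const AbstractAttribute *AA,
  ///           bool &UsedAssumedInformation) -> Optional<Value *> {
  ///         return &ReplacementV; // Or None if it is not clear yet.
  ///       });
  /// \endcode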
1719   using SimplifictionCallbackTy = std::function<Optional<Value *>(
1720       const IRPosition &, const AbstractAttribute *, bool &)>;
1721   void registerSimplificationCallback(const IRPosition &IRP,
1722                                       const SimplifictionCallbackTy &CB) {
1723     SimplificationCallbacks[IRP].emplace_back(CB);
1724   }
1725 
1726   /// Return true if there is a simplification callback for \p IRP.
1727   bool hasSimplificationCallback(const IRPosition &IRP) {
1728     return SimplificationCallbacks.count(IRP);
1729   }
1730 
1731 private:
1732   /// The vector with all simplification callbacks registered by outside AAs.
1733   DenseMap<IRPosition, SmallVector<SimplifictionCallbackTy, 1>>
1734       SimplificationCallbacks;
1735 
1736 public:
1737   /// Translate \p V from the callee context into the call site context.
1738   Optional<Value *>
1739   translateArgumentToCallSiteContent(Optional<Value *> V, CallBase &CB,
1740                                      const AbstractAttribute &AA,
1741                                      bool &UsedAssumedInformation);
1742 
1743   /// Return true if \p AA (or its context instruction) is assumed dead.
1744   ///
1745   /// If \p LivenessAA is not provided it is queried.
1746   bool isAssumedDead(const AbstractAttribute &AA, const AAIsDead *LivenessAA,
1747                      bool &UsedAssumedInformation,
1748                      bool CheckBBLivenessOnly = false,
1749                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1750 
1751   /// Return true if \p I is assumed dead.
1752   ///
1753   /// If \p LivenessAA is not provided it is queried.
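  ///
  /// A common pattern in an `updateImpl` looks roughly like this (the early
  /// exit and names are illustrative):
  /// \code
  ///   bool UsedAssumedInformation = false;
  ///   if (A.isAssumedDead(I, this, /* LivenessAA */ nullptr,
  ///                       UsedAssumedInformation))
  ///     return true; // Skip instructions assumed dead.
  /// \endcode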
1754   bool isAssumedDead(const Instruction &I, const AbstractAttribute *QueryingAA,
1755                      const AAIsDead *LivenessAA, bool &UsedAssumedInformation,
1756                      bool CheckBBLivenessOnly = false,
1757                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1758 
1759   /// Return true if \p U is assumed dead.
1760   ///
1761   /// If \p FnLivenessAA is not provided it is queried.
1762   bool isAssumedDead(const Use &U, const AbstractAttribute *QueryingAA,
1763                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1764                      bool CheckBBLivenessOnly = false,
1765                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1766 
1767   /// Return true if \p IRP is assumed dead.
1768   ///
1769   /// If \p FnLivenessAA is not provided it is queried.
1770   bool isAssumedDead(const IRPosition &IRP, const AbstractAttribute *QueryingAA,
1771                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1772                      bool CheckBBLivenessOnly = false,
1773                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1774 
1775   /// Return true if \p BB is assumed dead.
1776   ///
1777   /// If \p LivenessAA is not provided it is queried.
1778   bool isAssumedDead(const BasicBlock &BB, const AbstractAttribute *QueryingAA,
1779                      const AAIsDead *FnLivenessAA,
1780                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1781 
1782   /// Check \p Pred on all (transitive) uses of \p V.
1783   ///
1784   /// This method will evaluate \p Pred on all (transitive) uses of the
1785   /// associated value and return true if \p Pred holds every time.
1786   /// If uses are skipped in favor of equivalent ones, e.g., if we look through
1787   /// memory, the \p EquivalentUseCB will be used to give the caller an idea
  /// what original use was replaced by a new one (or new ones). The visit is
1789   /// cut short if \p EquivalentUseCB returns false and the function will return
1790   /// false as well.
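  ///
  /// A sketch of a typical predicate (`isSafeUse` is a placeholder):
  /// \code
  ///   auto UsePred = [&](const Use &U, bool &Follow) {
  ///     Follow = true;        // Also visit the users of this use's user.
  ///     return isSafeUse(U);
  ///   };
  ///   if (!A.checkForAllUses(UsePred, *this, getAssociatedValue()))
  ///     return indicatePessimisticFixpoint();
  /// \endcode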
1791   bool checkForAllUses(function_ref<bool(const Use &, bool &)> Pred,
1792                        const AbstractAttribute &QueryingAA, const Value &V,
1793                        bool CheckBBLivenessOnly = false,
1794                        DepClassTy LivenessDepClass = DepClassTy::OPTIONAL,
1795                        bool IgnoreDroppableUses = true,
1796                        function_ref<bool(const Use &OldU, const Use &NewU)>
1797                            EquivalentUseCB = nullptr);
1798 
1799   /// Emit a remark generically.
1800   ///
1801   /// This template function can be used to generically emit a remark. The
1802   /// RemarkKind should be one of the following:
1803   ///   - OptimizationRemark to indicate a successful optimization attempt
1804   ///   - OptimizationRemarkMissed to report a failed optimization attempt
1805   ///   - OptimizationRemarkAnalysis to provide additional information about an
1806   ///     optimization attempt
1807   ///
1808   /// The remark is built using a callback function \p RemarkCB that takes a
1809   /// RemarkKind as input and returns a RemarkKind.
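  ///
  /// For example (remark name and message are illustrative):
  /// \code
  ///   A.emitRemark<OptimizationRemark>(
  ///       I, "AttributeDerived", [](OptimizationRemark OR) {
  ///         return OR << "derived an attribute for this instruction";
  ///       });
  /// \endcode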
1810   template <typename RemarkKind, typename RemarkCallBack>
1811   void emitRemark(Instruction *I, StringRef RemarkName,
1812                   RemarkCallBack &&RemarkCB) const {
1813     if (!Configuration.OREGetter)
1814       return;
1815 
1816     Function *F = I->getFunction();
1817     auto &ORE = Configuration.OREGetter(F);
1818 
1819     if (RemarkName.startswith("OMP"))
1820       ORE.emit([&]() {
1821         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, I))
1822                << " [" << RemarkName << "]";
1823       });
1824     else
1825       ORE.emit([&]() {
1826         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, I));
1827       });
1828   }
1829 
1830   /// Emit a remark on a function.
1831   template <typename RemarkKind, typename RemarkCallBack>
1832   void emitRemark(Function *F, StringRef RemarkName,
1833                   RemarkCallBack &&RemarkCB) const {
1834     if (!Configuration.OREGetter)
1835       return;
1836 
1837     auto &ORE = Configuration.OREGetter(F);
1838 
1839     if (RemarkName.startswith("OMP"))
1840       ORE.emit([&]() {
1841         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, F))
1842                << " [" << RemarkName << "]";
1843       });
1844     else
1845       ORE.emit([&]() {
1846         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, F));
1847       });
1848   }
1849 
1850   /// Helper struct used in the communication between an abstract attribute (AA)
1851   /// that wants to change the signature of a function and the Attributor which
1852   /// applies the changes. The struct is partially initialized with the
1853   /// information from the AA (see the constructor). All other members are
1854   /// provided by the Attributor prior to invoking any callbacks.
1855   struct ArgumentReplacementInfo {
1856     /// Callee repair callback type
1857     ///
1858     /// The function repair callback is invoked once to rewire the replacement
1859     /// arguments in the body of the new function. The argument replacement info
    /// is passed, as built from the registerFunctionSignatureRewrite call, as
    /// well as the replacement function and an iterator to the first
1862     /// replacement argument.
1863     using CalleeRepairCBTy = std::function<void(
1864         const ArgumentReplacementInfo &, Function &, Function::arg_iterator)>;
1865 
1866     /// Abstract call site (ACS) repair callback type
1867     ///
1868     /// The abstract call site repair callback is invoked once on every abstract
1869     /// call site of the replaced function (\see ReplacedFn). The callback needs
1870     /// to provide the operands for the call to the new replacement function.
1871     /// The number and type of the operands appended to the provided vector
1872     /// (second argument) is defined by the number and types determined through
1873     /// the replacement type vector (\see ReplacementTypes). The first argument
1874     /// is the ArgumentReplacementInfo object registered with the Attributor
1875     /// through the registerFunctionSignatureRewrite call.
1876     using ACSRepairCBTy =
1877         std::function<void(const ArgumentReplacementInfo &, AbstractCallSite,
1878                            SmallVectorImpl<Value *> &)>;
1879 
1880     /// Simple getters, see the corresponding members for details.
1881     ///{
1882 
1883     Attributor &getAttributor() const { return A; }
1884     const Function &getReplacedFn() const { return ReplacedFn; }
1885     const Argument &getReplacedArg() const { return ReplacedArg; }
1886     unsigned getNumReplacementArgs() const { return ReplacementTypes.size(); }
1887     const SmallVectorImpl<Type *> &getReplacementTypes() const {
1888       return ReplacementTypes;
1889     }
1890 
1891     ///}
1892 
1893   private:
1894     /// Constructor that takes the argument to be replaced, the types of
1895     /// the replacement arguments, as well as callbacks to repair the call sites
1896     /// and new function after the replacement happened.
1897     ArgumentReplacementInfo(Attributor &A, Argument &Arg,
1898                             ArrayRef<Type *> ReplacementTypes,
1899                             CalleeRepairCBTy &&CalleeRepairCB,
1900                             ACSRepairCBTy &&ACSRepairCB)
1901         : A(A), ReplacedFn(*Arg.getParent()), ReplacedArg(Arg),
1902           ReplacementTypes(ReplacementTypes.begin(), ReplacementTypes.end()),
1903           CalleeRepairCB(std::move(CalleeRepairCB)),
1904           ACSRepairCB(std::move(ACSRepairCB)) {}
1905 
1906     /// Reference to the attributor to allow access from the callbacks.
1907     Attributor &A;
1908 
1909     /// The "old" function replaced by ReplacementFn.
1910     const Function &ReplacedFn;
1911 
1912     /// The "old" argument replaced by new ones defined via ReplacementTypes.
1913     const Argument &ReplacedArg;
1914 
1915     /// The types of the arguments replacing ReplacedArg.
1916     const SmallVector<Type *, 8> ReplacementTypes;
1917 
1918     /// Callee repair callback, see CalleeRepairCBTy.
1919     const CalleeRepairCBTy CalleeRepairCB;
1920 
1921     /// Abstract call site (ACS) repair callback, see ACSRepairCBTy.
1922     const ACSRepairCBTy ACSRepairCB;
1923 
1924     /// Allow access to the private members from the Attributor.
1925     friend struct Attributor;
1926   };
1927 
1928   /// Check if we can rewrite a function signature.
1929   ///
1930   /// The argument \p Arg is replaced with new ones defined by the number,
1931   /// order, and types in \p ReplacementTypes.
1932   ///
1933   /// \returns True, if the replacement can be registered, via
1934   /// registerFunctionSignatureRewrite, false otherwise.
1935   bool isValidFunctionSignatureRewrite(Argument &Arg,
1936                                        ArrayRef<Type *> ReplacementTypes);
1937 
1938   /// Register a rewrite for a function signature.
1939   ///
1940   /// The argument \p Arg is replaced with new ones defined by the number,
1941   /// order, and types in \p ReplacementTypes. The rewiring at the call sites is
1942   /// done through \p ACSRepairCB and at the callee site through
1943   /// \p CalleeRepairCB.
1944   ///
1945   /// \returns True, if the replacement was registered, false otherwise.
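  ///
  /// A sketch of a registration (the replacement types and the repair logic
  /// are placeholders):
  /// \code
  ///   SmallVector<Type *, 2> ReplacementTypes = {Int32Ty, Int32Ty};
  ///   A.registerFunctionSignatureRewrite(
  ///       Arg, ReplacementTypes,
  ///       [](const ArgumentReplacementInfo &ARI, Function &NewFn,
  ///          Function::arg_iterator NewArgIt) {
  ///         // Rewire uses of the old argument to the new ones here.
  ///       },
  ///       [](const ArgumentReplacementInfo &ARI, AbstractCallSite ACS,
  ///          SmallVectorImpl<Value *> &NewArgOperands) {
  ///         // Append the call site operands for the new arguments here.
  ///       });
  /// \endcode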
1946   bool registerFunctionSignatureRewrite(
1947       Argument &Arg, ArrayRef<Type *> ReplacementTypes,
1948       ArgumentReplacementInfo::CalleeRepairCBTy &&CalleeRepairCB,
1949       ArgumentReplacementInfo::ACSRepairCBTy &&ACSRepairCB);
1950 
1951   /// Check \p Pred on all function call sites.
1952   ///
  /// This method will evaluate \p Pred on call sites and return
  /// true if \p Pred holds for every call site. However, this is only possible
  /// if all call sites are known, hence the function has internal linkage.
1956   /// If true is returned, \p UsedAssumedInformation is set if assumed
1957   /// information was used to skip or simplify potential call sites.
1958   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1959                             const AbstractAttribute &QueryingAA,
1960                             bool RequireAllCallSites,
1961                             bool &UsedAssumedInformation);
1962 
1963   /// Check \p Pred on all call sites of \p Fn.
1964   ///
  /// This method will evaluate \p Pred on call sites and return
  /// true if \p Pred holds for every call site. However, this is only possible
  /// if all call sites are known, hence the function has internal linkage.
1968   /// If true is returned, \p UsedAssumedInformation is set if assumed
1969   /// information was used to skip or simplify potential call sites.
1970   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
1971                             const Function &Fn, bool RequireAllCallSites,
1972                             const AbstractAttribute *QueryingAA,
1973                             bool &UsedAssumedInformation);
1974 
  /// Check \p Pred on all values potentially returned by the associated
  /// function.
1976   ///
1977   /// This method will evaluate \p Pred on all values potentially returned by
1978   /// the function associated with \p QueryingAA. The returned values are
1979   /// matched with their respective return instructions. Returns true if \p Pred
1980   /// holds on all of them.
1981   bool checkForAllReturnedValuesAndReturnInsts(
1982       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred,
1983       const AbstractAttribute &QueryingAA);
1984 
1985   /// Check \p Pred on all values potentially returned by the function
1986   /// associated with \p QueryingAA.
1987   ///
1988   /// This is the context insensitive version of the method above.
1989   bool checkForAllReturnedValues(function_ref<bool(Value &)> Pred,
1990                                  const AbstractAttribute &QueryingAA);
1991 
1992   /// Check \p Pred on all instructions in \p Fn with an opcode present in
1993   /// \p Opcodes.
1994   ///
1995   /// This method will evaluate \p Pred on all instructions with an opcode
  /// present in \p Opcodes and return true if \p Pred holds on all of them.
1997   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
1998                                const Function *Fn,
1999                                const AbstractAttribute &QueryingAA,
2000                                const ArrayRef<unsigned> &Opcodes,
2001                                bool &UsedAssumedInformation,
2002                                bool CheckBBLivenessOnly = false,
2003                                bool CheckPotentiallyDead = false);
2004 
2005   /// Check \p Pred on all instructions with an opcode present in \p Opcodes.
2006   ///
2007   /// This method will evaluate \p Pred on all instructions with an opcode
  /// present in \p Opcodes and return true if \p Pred holds on all of them.
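  ///
  /// A sketch of a typical use (the predicate is illustrative):
  /// \code
  ///   bool UsedAssumedInformation = false;
  ///   auto CheckStore = [&](Instruction &I) {
  ///     return !cast<StoreInst>(I).isVolatile();
  ///   };
  ///   if (!A.checkForAllInstructions(CheckStore, *this,
  ///                                  {(unsigned)Instruction::Store},
  ///                                  UsedAssumedInformation))
  ///     return indicatePessimisticFixpoint();
  /// \endcode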
2009   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
2010                                const AbstractAttribute &QueryingAA,
2011                                const ArrayRef<unsigned> &Opcodes,
2012                                bool &UsedAssumedInformation,
2013                                bool CheckBBLivenessOnly = false,
2014                                bool CheckPotentiallyDead = false);
2015 
  /// Check \p Pred on all call-like instructions (=CallBase derived).
2017   ///
  /// See checkForAllInstructions(...) for more information.
2019   bool checkForAllCallLikeInstructions(function_ref<bool(Instruction &)> Pred,
2020                                        const AbstractAttribute &QueryingAA,
2021                                        bool &UsedAssumedInformation,
2022                                        bool CheckBBLivenessOnly = false,
2023                                        bool CheckPotentiallyDead = false) {
2024     return checkForAllInstructions(
2025         Pred, QueryingAA,
2026         {(unsigned)Instruction::Invoke, (unsigned)Instruction::CallBr,
2027          (unsigned)Instruction::Call},
2028         UsedAssumedInformation, CheckBBLivenessOnly, CheckPotentiallyDead);
2029   }
2030 
2031   /// Check \p Pred on all Read/Write instructions.
2032   ///
2033   /// This method will evaluate \p Pred on all instructions that read or write
2034   /// to memory present in the information cache and return true if \p Pred
2035   /// holds on all of them.
2036   bool checkForAllReadWriteInstructions(function_ref<bool(Instruction &)> Pred,
2037                                         AbstractAttribute &QueryingAA,
2038                                         bool &UsedAssumedInformation);
2039 
2040   /// Create a shallow wrapper for \p F such that \p F has internal linkage
  /// afterwards. It also sets the original \p F's name to an anonymous one.
2042   ///
2043   /// A wrapper is a function with the same type (and attributes) as \p F
2044   /// that will only call \p F and return the result, if any.
2045   ///
  /// Assuming the declaration of \p F looks like:
2047   ///   rty F(aty0 arg0, ..., atyN argN);
2048   ///
2049   /// The wrapper will then look as follows:
2050   ///   rty wrapper(aty0 arg0, ..., atyN argN) {
2051   ///     return F(arg0, ..., argN);
2052   ///   }
2053   ///
2054   static void createShallowWrapper(Function &F);
2055 
  /// Returns true if the function \p F can be internalized, i.e., it has a
2057   /// compatible linkage.
2058   static bool isInternalizable(Function &F);
2059 
2060   /// Make another copy of the function \p F such that the copied version has
  /// internal linkage afterwards and can be analysed. Then we replace all uses
  /// of the original function with the copied one.
2063   ///
2064   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
2065   /// linkage can be internalized because these linkages guarantee that other
2066   /// definitions with the same name have the same semantics as this one.
2067   ///
2068   /// This will only be run if the `attributor-allow-deep-wrappers` option is
2069   /// set, or if the function is called with \p Force set to true.
2070   ///
2071   /// If the function \p F failed to be internalized the return value will be a
2072   /// null pointer.
2073   static Function *internalizeFunction(Function &F, bool Force = false);
2074 
2075   /// Make copies of each function in the set \p FnSet such that the copied
2076   /// version has internal linkage afterwards and can be analysed. Then we
  /// replace all uses of the original function with the copied one. The map
2078   /// \p FnMap contains a mapping of functions to their internalized versions.
2079   ///
2080   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
2081   /// linkage can be internalized because these linkages guarantee that other
2082   /// definitions with the same name have the same semantics as this one.
2083   ///
2084   /// This version will internalize all the functions in the set \p FnSet at
  /// once and then replace the uses. This prevents internalized functions from
  /// being called by external functions when there is an internalized version
  /// in the module.
2088   static bool internalizeFunctions(SmallPtrSetImpl<Function *> &FnSet,
2089                                    DenseMap<Function *, Function *> &FnMap);
2090 
2091   /// Return the data layout associated with the anchor scope.
2092   const DataLayout &getDataLayout() const { return InfoCache.DL; }
2093 
2094   /// The allocator used to allocate memory, e.g. for `AbstractAttribute`s.
2095   BumpPtrAllocator &Allocator;
2096 
2097 private:
2098   /// This method will do fixpoint iteration until fixpoint or the
2099   /// maximum iteration count is reached.
2100   ///
  /// If the maximum iteration count is reached, this method will
2102   /// indicate pessimistic fixpoint on attributes that transitively depend
2103   /// on attributes that were scheduled for an update.
2104   void runTillFixpoint();
2105 
2106   /// Gets called after scheduling, manifests attributes to the LLVM IR.
2107   ChangeStatus manifestAttributes();
2108 
2109   /// Gets called after attributes have been manifested, cleans up the IR.
2110   /// Deletes dead functions, blocks and instructions.
  /// Rewrites function signatures and updates the call graph.
2112   ChangeStatus cleanupIR();
2113 
2114   /// Identify internal functions that are effectively dead, thus not reachable
2115   /// from a live entry point. The functions are added to ToBeDeletedFunctions.
2116   void identifyDeadInternalFunctions();
2117 
2118   /// Run `::update` on \p AA and track the dependences queried while doing so.
2119   /// Also adjust the state if we know further updates are not necessary.
2120   ChangeStatus updateAA(AbstractAttribute &AA);
2121 
2122   /// Remember the dependences on the top of the dependence stack such that they
2123   /// may trigger further updates. (\see DependenceStack)
2124   void rememberDependences();
2125 
2126   /// Determine if CallBase context in \p IRP should be propagated.
2127   bool shouldPropagateCallBaseContext(const IRPosition &IRP);
2128 
2129   /// Apply all requested function signature rewrites
2130   /// (\see registerFunctionSignatureRewrite) and return Changed if the module
2131   /// was altered.
2132   ChangeStatus
2133   rewriteFunctionSignatures(SmallSetVector<Function *, 8> &ModifiedFns);
2134 
2135   /// Check if the Attribute \p AA should be seeded.
2136   /// See getOrCreateAAFor.
2137   bool shouldSeedAttribute(AbstractAttribute &AA);
2138 
2139   /// A nested map to lookup abstract attributes based on the argument position
2140   /// on the outer level, and the addresses of the static member (AAType::ID) on
2141   /// the inner level.
2142   ///{
2143   using AAMapKeyTy = std::pair<const char *, IRPosition>;
2144   DenseMap<AAMapKeyTy, AbstractAttribute *> AAMap;
2145   ///}
2146 
2147   /// Map to remember all requested signature changes (= argument replacements).
2148   DenseMap<Function *, SmallVector<std::unique_ptr<ArgumentReplacementInfo>, 8>>
2149       ArgumentReplacementMap;
2150 
2151   /// The set of functions we are deriving attributes for.
2152   SetVector<Function *> &Functions;
2153 
2154   /// The information cache that holds pre-processed (LLVM-IR) information.
2155   InformationCache &InfoCache;
2156 
2157   /// Abstract Attribute dependency graph
2158   AADepGraph DG;
2159 
2160   /// Set of functions for which we modified the content such that it might
2161   /// impact the call graph.
2162   SmallSetVector<Function *, 8> CGModifiedFunctions;
2163 
2164   /// Information about a dependence. If FromAA is changed ToAA needs to be
2165   /// updated as well.
2166   struct DepInfo {
2167     const AbstractAttribute *FromAA;
2168     const AbstractAttribute *ToAA;
2169     DepClassTy DepClass;
2170   };
2171 
2172   /// The dependence stack is used to track dependences during an
2173   /// `AbstractAttribute::update` call. As `AbstractAttribute::update` can be
2174   /// recursive we might have multiple vectors of dependences in here. The stack
  /// size should be adjusted according to the expected recursion depth and the
2176   /// inner dependence vector size to the expected number of dependences per
2177   /// abstract attribute. Since the inner vectors are actually allocated on the
2178   /// stack we can be generous with their size.
2179   using DependenceVector = SmallVector<DepInfo, 8>;
2180   SmallVector<DependenceVector *, 16> DependenceStack;
2181 
2182   /// A set to remember the functions we already assume to be live and visited.
2183   DenseSet<const Function *> VisitedFunctions;
2184 
  /// Uses we replace with a new value after manifest is done. We will then
  /// remove trivially dead instructions as well.
2187   SmallMapVector<Use *, Value *, 32> ToBeChangedUses;
2188 
  /// Values we replace with a new value after manifest is done. We will then
  /// remove trivially dead instructions as well.
2191   SmallMapVector<Value *, std::pair<Value *, bool>, 32> ToBeChangedValues;
2192 
2193   /// Instructions we replace with `unreachable` insts after manifest is done.
2194   SmallSetVector<WeakVH, 16> ToBeChangedToUnreachableInsts;
2195 
2196   /// Invoke instructions with at least a single dead successor block.
2197   SmallSetVector<WeakVH, 16> InvokeWithDeadSuccessor;
2198 
2199   /// A flag that indicates which stage of the process we are in. Initially, the
  /// phase is SEEDING. The phase is changed in `Attributor::run()`.
2201   enum class AttributorPhase {
2202     SEEDING,
2203     UPDATE,
2204     MANIFEST,
2205     CLEANUP,
2206   } Phase = AttributorPhase::SEEDING;
2207 
2208   /// The current initialization chain length. Tracked to avoid stack overflows.
2209   unsigned InitializationChainLength = 0;
2210 
2211   /// Functions, blocks, and instructions we delete after manifest is done.
2212   ///
2213   ///{
2214   SmallPtrSet<BasicBlock *, 8> ManifestAddedBlocks;
2215   SmallSetVector<Function *, 8> ToBeDeletedFunctions;
2216   SmallSetVector<BasicBlock *, 8> ToBeDeletedBlocks;
2217   SmallSetVector<WeakVH, 8> ToBeDeletedInsts;
2218   ///}
2219 
2220   /// Container with all the query AAs that requested an update via
2221   /// registerForUpdate.
2222   SmallSetVector<AbstractAttribute *, 16> QueryAAsAwaitingUpdate;
2223 
2224   /// User provided configuration for this Attributor instance.
2225   const AttributorConfig Configuration;
2226 
2227   friend AADepGraph;
2228   friend AttributorCallGraph;
2229 };
2230 
2231 /// An interface to query the internal state of an abstract attribute.
2232 ///
2233 /// The abstract state is a minimal interface that allows the Attributor to
2234 /// communicate with the abstract attributes about their internal state without
2235 /// enforcing or exposing implementation details, e.g., the (existence of an)
2236 /// underlying lattice.
2237 ///
2238 /// It is sufficient to be able to query if a state is (1) valid or invalid, (2)
2239 /// at a fixpoint, and to indicate to the state that (3) an optimistic fixpoint
2240 /// was reached or (4) a pessimistic fixpoint was enforced.
2241 ///
2242 /// All methods need to be implemented by the subclass. For the common use case,
2243 /// a single boolean state or a bit-encoded state, the BooleanState and
2244 /// {Inc,Dec,Bit}IntegerState classes are already provided. An abstract
2245 /// attribute can inherit from them to get the abstract state interface and
/// additional methods to directly modify the state if needed. See the
2247 /// class comments for help.
2248 struct AbstractState {
2249   virtual ~AbstractState() = default;
2250 
2251   /// Return if this abstract state is in a valid state. If false, no
2252   /// information provided should be used.
2253   virtual bool isValidState() const = 0;
2254 
2255   /// Return if this abstract state is fixed, thus does not need to be updated
2256   /// if information changes as it cannot change itself.
2257   virtual bool isAtFixpoint() const = 0;
2258 
2259   /// Indicate that the abstract state should converge to the optimistic state.
2260   ///
2261   /// This will usually make the optimistically assumed state the known to be
2262   /// true state.
2263   ///
2264   /// \returns ChangeStatus::UNCHANGED as the assumed value should not change.
2265   virtual ChangeStatus indicateOptimisticFixpoint() = 0;
2266 
2267   /// Indicate that the abstract state should converge to the pessimistic state.
2268   ///
2269   /// This will usually revert the optimistically assumed state to the known to
2270   /// be true state.
2271   ///
2272   /// \returns ChangeStatus::CHANGED as the assumed value may change.
2273   virtual ChangeStatus indicatePessimisticFixpoint() = 0;
2274 };
2275 
2276 /// Simple state with integers encoding.
2277 ///
2278 /// The interface ensures that the assumed bits are always a subset of the known
2279 /// bits. Users can only add known bits and, except through adding known bits,
/// they can only remove assumed bits. This should guarantee monotonicity and
/// thereby the existence of a fixpoint (if used correctly). The fixpoint is
/// reached when the assumed and known state/bits are equal. Users can
/// force/indicate a fixpoint. If an optimistic one is indicated, the known
2284 /// state will catch up with the assumed one, for a pessimistic fixpoint it is
2285 /// the other way around.
2286 template <typename base_ty, base_ty BestState, base_ty WorstState>
2287 struct IntegerStateBase : public AbstractState {
2288   using base_t = base_ty;
2289 
2290   IntegerStateBase() = default;
2291   IntegerStateBase(base_t Assumed) : Assumed(Assumed) {}
2292 
2293   /// Return the best possible representable state.
2294   static constexpr base_t getBestState() { return BestState; }
2295   static constexpr base_t getBestState(const IntegerStateBase &) {
2296     return getBestState();
2297   }
2298 
2299   /// Return the worst possible representable state.
2300   static constexpr base_t getWorstState() { return WorstState; }
2301   static constexpr base_t getWorstState(const IntegerStateBase &) {
2302     return getWorstState();
2303   }
2304 
2305   /// See AbstractState::isValidState()
2306   /// NOTE: For now we simply pretend that the worst possible state is invalid.
2307   bool isValidState() const override { return Assumed != getWorstState(); }
2308 
2309   /// See AbstractState::isAtFixpoint()
2310   bool isAtFixpoint() const override { return Assumed == Known; }
2311 
2312   /// See AbstractState::indicateOptimisticFixpoint(...)
2313   ChangeStatus indicateOptimisticFixpoint() override {
2314     Known = Assumed;
2315     return ChangeStatus::UNCHANGED;
2316   }
2317 
2318   /// See AbstractState::indicatePessimisticFixpoint(...)
2319   ChangeStatus indicatePessimisticFixpoint() override {
2320     Assumed = Known;
2321     return ChangeStatus::CHANGED;
2322   }
2323 
2324   /// Return the known state encoding
2325   base_t getKnown() const { return Known; }
2326 
2327   /// Return the assumed state encoding.
2328   base_t getAssumed() const { return Assumed; }
2329 
2330   /// Equality for IntegerStateBase.
2331   bool
2332   operator==(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2333     return this->getAssumed() == R.getAssumed() &&
2334            this->getKnown() == R.getKnown();
2335   }
2336 
2337   /// Inequality for IntegerStateBase.
2338   bool
2339   operator!=(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2340     return !(*this == R);
2341   }
2342 
2343   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2344   /// intended that only information assumed in both states will be assumed in
2345   /// this one afterwards.
2346   void operator^=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2347     handleNewAssumedValue(R.getAssumed());
2348   }
2349 
2350   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2351   /// intended that information known in either state will be known in
2352   /// this one afterwards.
2353   void operator+=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2354     handleNewKnownValue(R.getKnown());
2355   }
2356 
2357   void operator|=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2358     joinOR(R.getAssumed(), R.getKnown());
2359   }
2360 
2361   void operator&=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2362     joinAND(R.getAssumed(), R.getKnown());
2363   }
2364 
2365 protected:
2366   /// Handle a new assumed value \p Value. Subtype dependent.
2367   virtual void handleNewAssumedValue(base_t Value) = 0;
2368 
2369   /// Handle a new known value \p Value. Subtype dependent.
2370   virtual void handleNewKnownValue(base_t Value) = 0;
2371 
  /// Join \p AssumedValue and \p KnownValue into this state with OR semantics.
  /// Subtype dependent.
2373   virtual void joinOR(base_t AssumedValue, base_t KnownValue) = 0;
2374 
  /// Join \p AssumedValue and \p KnownValue into this state with AND semantics.
  /// Subtype dependent.
2376   virtual void joinAND(base_t AssumedValue, base_t KnownValue) = 0;
2377 
2378   /// The known state encoding in an integer of type base_t.
2379   base_t Known = getWorstState();
2380 
2381   /// The assumed state encoding in an integer of type base_t.
2382   base_t Assumed = getBestState();
2383 };
2384 
2385 /// Specialization of the integer state for a bit-wise encoding.
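///
/// For example, with illustrative bit flags NO_READS = 1 << 0 and
/// NO_WRITES = 1 << 1 (sketch only):
/// \code
///   BitIntegerState<uint8_t, /* BestState */ 3> S;
///   S.addKnownBits(NO_READS);       // NO_READS is now known (and assumed).
///   S.removeAssumedBits(NO_WRITES); // NO_WRITES is no longer assumed.
///   S.isKnown(NO_READS);            // true
///   S.isAssumed(NO_WRITES);         // false
/// \endcode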
2386 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2387           base_ty WorstState = 0>
2388 struct BitIntegerState
2389     : public IntegerStateBase<base_ty, BestState, WorstState> {
2390   using base_t = base_ty;
2391 
2392   /// Return true if the bits set in \p BitsEncoding are "known bits".
2393   bool isKnown(base_t BitsEncoding) const {
2394     return (this->Known & BitsEncoding) == BitsEncoding;
2395   }
2396 
2397   /// Return true if the bits set in \p BitsEncoding are "assumed bits".
2398   bool isAssumed(base_t BitsEncoding) const {
2399     return (this->Assumed & BitsEncoding) == BitsEncoding;
2400   }
2401 
2402   /// Add the bits in \p BitsEncoding to the "known bits".
2403   BitIntegerState &addKnownBits(base_t Bits) {
2404     // Make sure we never miss any "known bits".
2405     this->Assumed |= Bits;
2406     this->Known |= Bits;
2407     return *this;
2408   }
2409 
2410   /// Remove the bits in \p BitsEncoding from the "assumed bits" if not known.
2411   BitIntegerState &removeAssumedBits(base_t BitsEncoding) {
2412     return intersectAssumedBits(~BitsEncoding);
2413   }
2414 
2415   /// Remove the bits in \p BitsEncoding from the "known bits".
2416   BitIntegerState &removeKnownBits(base_t BitsEncoding) {
2417     this->Known = (this->Known & ~BitsEncoding);
2418     return *this;
2419   }
2420 
2421   /// Keep only "assumed bits" also set in \p BitsEncoding but all known ones.
2422   BitIntegerState &intersectAssumedBits(base_t BitsEncoding) {
2423     // Make sure we never loose any "known bits".
2424     this->Assumed = (this->Assumed & BitsEncoding) | this->Known;
2425     return *this;
2426   }
2427 
2428 private:
2429   void handleNewAssumedValue(base_t Value) override {
2430     intersectAssumedBits(Value);
2431   }
2432   void handleNewKnownValue(base_t Value) override { addKnownBits(Value); }
2433   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2434     this->Known |= KnownValue;
2435     this->Assumed |= AssumedValue;
2436   }
2437   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2438     this->Known &= KnownValue;
2439     this->Assumed &= AssumedValue;
2440   }
2441 };
2442 
2443 /// Specialization of the integer state for an increasing value, hence ~0u is
2444 /// the best state and 0 the worst.
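///
/// For instance (values are illustrative):
/// \code
///   IncIntegerState<uint32_t> S; // Assumed = ~0u (best), Known = 0 (worst).
///   S.takeKnownMaximum(4);       // Known becomes 4, Assumed stays >= 4.
///   S.takeAssumedMinimum(2);     // Assumed is clamped to Known, i.e., 4.
/// \endcode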
2445 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2446           base_ty WorstState = 0>
2447 struct IncIntegerState
2448     : public IntegerStateBase<base_ty, BestState, WorstState> {
2449   using super = IntegerStateBase<base_ty, BestState, WorstState>;
2450   using base_t = base_ty;
2451 
2452   IncIntegerState() : super() {}
2453   IncIntegerState(base_t Assumed) : super(Assumed) {}
2454 
2455   /// Return the best possible representable state.
2456   static constexpr base_t getBestState() { return BestState; }
2457   static constexpr base_t
2458   getBestState(const IncIntegerState<base_ty, BestState, WorstState> &) {
2459     return getBestState();
2460   }
2461 
2462   /// Take minimum of assumed and \p Value.
2463   IncIntegerState &takeAssumedMinimum(base_t Value) {
    // Make sure we never lose the "known value".
2465     this->Assumed = std::max(std::min(this->Assumed, Value), this->Known);
2466     return *this;
2467   }
2468 
2469   /// Take maximum of known and \p Value.
2470   IncIntegerState &takeKnownMaximum(base_t Value) {
    // Make sure we never lose the "known value".
2472     this->Assumed = std::max(Value, this->Assumed);
2473     this->Known = std::max(Value, this->Known);
2474     return *this;
2475   }
2476 
2477 private:
2478   void handleNewAssumedValue(base_t Value) override {
2479     takeAssumedMinimum(Value);
2480   }
2481   void handleNewKnownValue(base_t Value) override { takeKnownMaximum(Value); }
2482   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2483     this->Known = std::max(this->Known, KnownValue);
2484     this->Assumed = std::max(this->Assumed, AssumedValue);
2485   }
2486   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2487     this->Known = std::min(this->Known, KnownValue);
2488     this->Assumed = std::min(this->Assumed, AssumedValue);
2489   }
2490 };
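
// Usage sketch (illustrative values): the assumed value is only ever lowered,
// the known value only ever raised, and the assumed value never drops below
// the known one.
//
//   IncIntegerState<uint32_t> S;  // Known = 0, Assumed = ~0u.
//   S.takeKnownMaximum(8);        // Known = 8, Assumed stays ~0u.
//   S.takeAssumedMinimum(4);      // Assumed = max(min(~0u, 4), 8) = 8.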
2491 
2492 /// Specialization of the integer state for a decreasing value, hence 0 is the
2493 /// best state and ~0u the worst.
2494 template <typename base_ty = uint32_t>
2495 struct DecIntegerState : public IntegerStateBase<base_ty, 0, ~base_ty(0)> {
2496   using base_t = base_ty;
2497 
2498   /// Take maximum of assumed and \p Value.
2499   DecIntegerState &takeAssumedMaximum(base_t Value) {
    // Make sure we never lose the "known value".
2501     this->Assumed = std::min(std::max(this->Assumed, Value), this->Known);
2502     return *this;
2503   }
2504 
2505   /// Take minimum of known and \p Value.
2506   DecIntegerState &takeKnownMinimum(base_t Value) {
    // Make sure we never lose the "known value".
2508     this->Assumed = std::min(Value, this->Assumed);
2509     this->Known = std::min(Value, this->Known);
2510     return *this;
2511   }
2512 
2513 private:
2514   void handleNewAssumedValue(base_t Value) override {
2515     takeAssumedMaximum(Value);
2516   }
2517   void handleNewKnownValue(base_t Value) override { takeKnownMinimum(Value); }
2518   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2519     this->Assumed = std::min(this->Assumed, KnownValue);
2520     this->Assumed = std::min(this->Assumed, AssumedValue);
2521   }
2522   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2523     this->Assumed = std::max(this->Assumed, KnownValue);
2524     this->Assumed = std::max(this->Assumed, AssumedValue);
2525   }
2526 };
2527 
2528 /// Simple wrapper for a single bit (boolean) state.
2529 struct BooleanState : public IntegerStateBase<bool, true, false> {
2530   using super = IntegerStateBase<bool, true, false>;
2531   using base_t = IntegerStateBase::base_t;
2532 
2533   BooleanState() = default;
2534   BooleanState(base_t Assumed) : super(Assumed) {}
2535 
2536   /// Set the assumed value to \p Value but never below the known one.
2537   void setAssumed(bool Value) { Assumed &= (Known | Value); }
2538 
  /// Set the known and assumed value to \p Value.
2540   void setKnown(bool Value) {
2541     Known |= Value;
2542     Assumed |= Value;
2543   }
2544 
2545   /// Return true if the state is assumed to hold.
2546   bool isAssumed() const { return getAssumed(); }
2547 
2548   /// Return true if the state is known to hold.
2549   bool isKnown() const { return getKnown(); }
2550 
2551 private:
2552   void handleNewAssumedValue(base_t Value) override {
2553     if (!Value)
2554       Assumed = Known;
2555   }
2556   void handleNewKnownValue(base_t Value) override {
2557     if (Value)
2558       Known = (Assumed = Value);
2559   }
2560   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2561     Known |= KnownValue;
2562     Assumed |= AssumedValue;
2563   }
2564   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2565     Known &= KnownValue;
2566     Assumed &= AssumedValue;
2567   }
2568 };
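
// Usage sketch (illustrative): once the property is known to hold, the
// assumed value can no longer be cleared.
//
//   BooleanState B;       // Known = false, Assumed = true (optimistic).
//   B.setKnown(true);     // Known = Assumed = true.
//   B.setAssumed(false);  // No effect, the assumed value never drops below known.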
2569 
2570 /// State for an integer range.
2571 struct IntegerRangeState : public AbstractState {
2572 
2573   /// Bitwidth of the associated value.
2574   uint32_t BitWidth;
2575 
2576   /// State representing assumed range, initially set to empty.
2577   ConstantRange Assumed;
2578 
2579   /// State representing known range, initially set to [-inf, inf].
2580   ConstantRange Known;
2581 
2582   IntegerRangeState(uint32_t BitWidth)
2583       : BitWidth(BitWidth), Assumed(ConstantRange::getEmpty(BitWidth)),
2584         Known(ConstantRange::getFull(BitWidth)) {}
2585 
2586   IntegerRangeState(const ConstantRange &CR)
2587       : BitWidth(CR.getBitWidth()), Assumed(CR),
2588         Known(getWorstState(CR.getBitWidth())) {}
2589 
2590   /// Return the worst possible representable state.
2591   static ConstantRange getWorstState(uint32_t BitWidth) {
2592     return ConstantRange::getFull(BitWidth);
2593   }
2594 
2595   /// Return the best possible representable state.
2596   static ConstantRange getBestState(uint32_t BitWidth) {
2597     return ConstantRange::getEmpty(BitWidth);
2598   }
2599   static ConstantRange getBestState(const IntegerRangeState &IRS) {
2600     return getBestState(IRS.getBitWidth());
2601   }
2602 
2603   /// Return associated values' bit width.
2604   uint32_t getBitWidth() const { return BitWidth; }
2605 
2606   /// See AbstractState::isValidState()
2607   bool isValidState() const override {
2608     return BitWidth > 0 && !Assumed.isFullSet();
2609   }
2610 
2611   /// See AbstractState::isAtFixpoint()
2612   bool isAtFixpoint() const override { return Assumed == Known; }
2613 
2614   /// See AbstractState::indicateOptimisticFixpoint(...)
2615   ChangeStatus indicateOptimisticFixpoint() override {
2616     Known = Assumed;
2617     return ChangeStatus::CHANGED;
2618   }
2619 
2620   /// See AbstractState::indicatePessimisticFixpoint(...)
2621   ChangeStatus indicatePessimisticFixpoint() override {
2622     Assumed = Known;
2623     return ChangeStatus::CHANGED;
2624   }
2625 
  /// Return the known state encoding.
2627   ConstantRange getKnown() const { return Known; }
2628 
2629   /// Return the assumed state encoding.
2630   ConstantRange getAssumed() const { return Assumed; }
2631 
2632   /// Unite assumed range with the passed state.
2633   void unionAssumed(const ConstantRange &R) {
    // Don't lose a known range.
2635     Assumed = Assumed.unionWith(R).intersectWith(Known);
2636   }
2637 
2638   /// See IntegerRangeState::unionAssumed(..).
2639   void unionAssumed(const IntegerRangeState &R) {
2640     unionAssumed(R.getAssumed());
2641   }
2642 
2643   /// Intersect known range with the passed state.
2644   void intersectKnown(const ConstantRange &R) {
2645     Assumed = Assumed.intersectWith(R);
2646     Known = Known.intersectWith(R);
2647   }
2648 
2649   /// See IntegerRangeState::intersectKnown(..).
2650   void intersectKnown(const IntegerRangeState &R) {
2651     intersectKnown(R.getKnown());
2652   }
2653 
2654   /// Equality for IntegerRangeState.
2655   bool operator==(const IntegerRangeState &R) const {
2656     return getAssumed() == R.getAssumed() && getKnown() == R.getKnown();
2657   }
2658 
2659   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2660   /// intended that only information assumed in both states will be assumed in
2661   /// this one afterwards.
2662   IntegerRangeState operator^=(const IntegerRangeState &R) {
2663     // NOTE: `^=` operator seems like `intersect` but in this case, we need to
2664     // take `union`.
2665     unionAssumed(R);
2666     return *this;
2667   }
2668 
2669   IntegerRangeState operator&=(const IntegerRangeState &R) {
2670     // NOTE: `&=` operator seems like `intersect` but in this case, we need to
2671     // take `union`.
2672     Known = Known.unionWith(R.getKnown());
2673     Assumed = Assumed.unionWith(R.getAssumed());
2674     return *this;
2675   }
2676 };
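
// Usage sketch (illustrative ranges): the assumed range grows via unions but
// is always clamped to the known range, while intersecting the known range
// tightens what is known.
//
//   IntegerRangeState RS(/* BitWidth */ 32);  // Assumed = empty, Known = full.
//   RS.unionAssumed(ConstantRange(APInt(32, 0), APInt(32, 10)));
//   // Assumed = [0, 10), Known still full.
//   RS.intersectKnown(ConstantRange(APInt(32, 0), APInt(32, 100)));
//   // Known = [0, 100), Assumed still [0, 10).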
2677 
2678 /// Simple state for a set.
2679 ///
2680 /// This represents a state containing a set of values. The interface supports
2681 /// modelling sets that contain all possible elements. The state's internal
2682 /// value is modified using union or intersection operations.
2683 template <typename BaseTy> struct SetState : public AbstractState {
2684   /// A wrapper around a set that has semantics for handling unions and
2685   /// intersections with a "universal" set that contains all elements.
2686   struct SetContents {
    /// Creates either a universal set (containing every possible element) or
    /// an empty set, depending on \p Universal.
2688     SetContents(bool Universal) : Universal(Universal) {}
2689 
2690     /// Creates a non-universal set with concrete values.
2691     SetContents(const DenseSet<BaseTy> &Assumptions)
2692         : Universal(false), Set(Assumptions) {}
2693 
2694     SetContents(bool Universal, const DenseSet<BaseTy> &Assumptions)
2695         : Universal(Universal), Set(Assumptions) {}
2696 
2697     const DenseSet<BaseTy> &getSet() const { return Set; }
2698 
2699     bool isUniversal() const { return Universal; }
2700 
2701     bool empty() const { return Set.empty() && !Universal; }
2702 
    /// Finds A := A ^ B where A or B could be the "Universal" set which
    /// contains every possible element. Returns true if changes were made.
2705     bool getIntersection(const SetContents &RHS) {
2706       bool IsUniversal = Universal;
2707       unsigned Size = Set.size();
2708 
2709       // A := A ^ U = A
2710       if (RHS.isUniversal())
2711         return false;
2712 
2713       // A := U ^ B = B
2714       if (Universal)
2715         Set = RHS.getSet();
2716       else
2717         set_intersect(Set, RHS.getSet());
2718 
2719       Universal &= RHS.isUniversal();
2720       return IsUniversal != Universal || Size != Set.size();
2721     }
2722 
    /// Finds A := A u B where A or B could be the "Universal" set which
    /// contains every possible element. Returns true if changes were made.
2725     bool getUnion(const SetContents &RHS) {
2726       bool IsUniversal = Universal;
2727       unsigned Size = Set.size();
2728 
2729       // A := A u U = U = U u B
2730       if (!RHS.isUniversal() && !Universal)
2731         set_union(Set, RHS.getSet());
2732 
2733       Universal |= RHS.isUniversal();
2734       return IsUniversal != Universal || Size != Set.size();
2735     }
2736 
2737   private:
2738     /// Indicates if this set is "universal", containing every possible element.
2739     bool Universal;
2740 
2741     /// The set of currently active assumptions.
2742     DenseSet<BaseTy> Set;
2743   };
2744 
2745   SetState() : Known(false), Assumed(true), IsAtFixedpoint(false) {}
2746 
2747   /// Initializes the known state with an initial set and initializes the
2748   /// assumed state as universal.
2749   SetState(const DenseSet<BaseTy> &Known)
2750       : Known(Known), Assumed(true), IsAtFixedpoint(false) {}
2751 
2752   /// See AbstractState::isValidState()
2753   bool isValidState() const override { return !Assumed.empty(); }
2754 
2755   /// See AbstractState::isAtFixpoint()
2756   bool isAtFixpoint() const override { return IsAtFixedpoint; }
2757 
2758   /// See AbstractState::indicateOptimisticFixpoint(...)
2759   ChangeStatus indicateOptimisticFixpoint() override {
2760     IsAtFixedpoint = true;
2761     Known = Assumed;
2762     return ChangeStatus::UNCHANGED;
2763   }
2764 
2765   /// See AbstractState::indicatePessimisticFixpoint(...)
2766   ChangeStatus indicatePessimisticFixpoint() override {
2767     IsAtFixedpoint = true;
2768     Assumed = Known;
2769     return ChangeStatus::CHANGED;
2770   }
2771 
2772   /// Return the known state encoding.
2773   const SetContents &getKnown() const { return Known; }
2774 
2775   /// Return the assumed state encoding.
2776   const SetContents &getAssumed() const { return Assumed; }
2777 
  /// Returns true if the set state contains \p Elem.
2779   bool setContains(const BaseTy &Elem) const {
2780     return Assumed.getSet().contains(Elem) || Known.getSet().contains(Elem);
2781   }
2782 
2783   /// Performs the set intersection between this set and \p RHS. Returns true if
2784   /// changes were made.
2785   bool getIntersection(const SetContents &RHS) {
2786     unsigned SizeBefore = Assumed.getSet().size();
2787 
2788     // Get intersection and make sure that the known set is still a proper
2789     // subset of the assumed set. A := K u (A ^ R).
2790     Assumed.getIntersection(RHS);
2791     Assumed.getUnion(Known);
2792 
2793     return SizeBefore != Assumed.getSet().size();
2794   }
2795 
2796   /// Performs the set union between this set and \p RHS. Returns true if
2797   /// changes were made.
2798   bool getUnion(const SetContents &RHS) { return Assumed.getUnion(RHS); }
2799 
2800 private:
2801   /// The set of values known for this state.
2802   SetContents Known;
2803 
2804   /// The set of assumed values for this state.
2805   SetContents Assumed;
2806 
2807   bool IsAtFixedpoint;
2808 };
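
// Usage sketch (illustrative elements): the assumed set starts out universal
// and is narrowed through intersections, but it never loses known elements.
//
//   SetState<int> S(DenseSet<int>({1, 2}));  // Known = {1, 2}, Assumed = universal.
//   S.getIntersection(SetState<int>::SetContents(DenseSet<int>({2, 3})));
//   // Assumed is now {2, 3} united with the known set, i.e. {1, 2, 3}.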
2809 
2810 /// Helper struct necessary as the modular build fails if the virtual method
/// IRAttribute::manifest is defined in Attributor.cpp.
2812 struct IRAttributeManifest {
2813   static ChangeStatus manifestAttrs(Attributor &A, const IRPosition &IRP,
2814                                     const ArrayRef<Attribute> &DeducedAttrs,
2815                                     bool ForceReplace = false);
2816 };
2817 
/// Helper to tie an abstract state implementation to an abstract attribute.
2819 template <typename StateTy, typename BaseType, class... Ts>
2820 struct StateWrapper : public BaseType, public StateTy {
2821   /// Provide static access to the type of the state.
2822   using StateType = StateTy;
2823 
2824   StateWrapper(const IRPosition &IRP, Ts... Args)
2825       : BaseType(IRP), StateTy(Args...) {}
2826 
2827   /// See AbstractAttribute::getState(...).
2828   StateType &getState() override { return *this; }
2829 
2830   /// See AbstractAttribute::getState(...).
2831   const StateType &getState() const override { return *this; }
2832 };
2833 
2834 /// Helper class that provides common functionality to manifest IR attributes.
2835 template <Attribute::AttrKind AK, typename BaseType>
2836 struct IRAttribute : public BaseType {
2837   IRAttribute(const IRPosition &IRP) : BaseType(IRP) {}
2838 
2839   /// See AbstractAttribute::initialize(...).
2840   void initialize(Attributor &A) override {
2841     const IRPosition &IRP = this->getIRPosition();
2842     if (isa<UndefValue>(IRP.getAssociatedValue()) ||
2843         this->hasAttr(getAttrKind(), /* IgnoreSubsumingPositions */ false,
2844                       &A)) {
2845       this->getState().indicateOptimisticFixpoint();
2846       return;
2847     }
2848 
2849     bool IsFnInterface = IRP.isFnInterfaceKind();
2850     const Function *FnScope = IRP.getAnchorScope();
2851     // TODO: Not all attributes require an exact definition. Find a way to
2852     //       enable deduction for some but not all attributes in case the
2853     //       definition might be changed at runtime, see also
2854     //       http://lists.llvm.org/pipermail/llvm-dev/2018-February/121275.html.
2855     // TODO: We could always determine abstract attributes and if sufficient
2856     //       information was found we could duplicate the functions that do not
2857     //       have an exact definition.
2858     if (IsFnInterface && (!FnScope || !A.isFunctionIPOAmendable(*FnScope)))
2859       this->getState().indicatePessimisticFixpoint();
2860   }
2861 
2862   /// See AbstractAttribute::manifest(...).
2863   ChangeStatus manifest(Attributor &A) override {
2864     if (isa<UndefValue>(this->getIRPosition().getAssociatedValue()))
2865       return ChangeStatus::UNCHANGED;
2866     SmallVector<Attribute, 4> DeducedAttrs;
2867     getDeducedAttributes(this->getAnchorValue().getContext(), DeducedAttrs);
2868     return IRAttributeManifest::manifestAttrs(A, this->getIRPosition(),
2869                                               DeducedAttrs);
2870   }
2871 
2872   /// Return the kind that identifies the abstract attribute implementation.
2873   Attribute::AttrKind getAttrKind() const { return AK; }
2874 
2875   /// Return the deduced attributes in \p Attrs.
2876   virtual void getDeducedAttributes(LLVMContext &Ctx,
2877                                     SmallVectorImpl<Attribute> &Attrs) const {
2878     Attrs.emplace_back(Attribute::get(Ctx, getAttrKind()));
2879   }
2880 };
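
// Subclasses that need to manifest an attribute with a payload can override
// getDeducedAttributes. A hypothetical sketch (the alignment value is made up
// for illustration):
//
//   void getDeducedAttributes(LLVMContext &Ctx,
//                             SmallVectorImpl<Attribute> &Attrs) const override {
//     Attrs.emplace_back(Attribute::getWithAlignment(Ctx, Align(16)));
//   }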
2881 
2882 /// Base struct for all "concrete attribute" deductions.
2883 ///
2884 /// The abstract attribute is a minimal interface that allows the Attributor to
/// orchestrate the abstract/fixpoint analysis. The design allows us to hide
/// away implementation choices made for the subclasses, to structure their
/// implementation, and to simplify the use of other abstract attributes
/// in-flight.
2888 ///
2889 /// To allow easy creation of new attributes, most methods have default
/// implementations. The ones that do not are generally straightforward, except
2891 /// `AbstractAttribute::updateImpl` which is the location of most reasoning
2892 /// associated with the abstract attribute. The update is invoked by the
2893 /// Attributor in case the situation used to justify the current optimistic
2894 /// state might have changed. The Attributor determines this automatically
2895 /// by monitoring the `Attributor::getAAFor` calls made by abstract attributes.
2896 ///
2897 /// The `updateImpl` method should inspect the IR and other abstract attributes
2898 /// in-flight to justify the best possible (=optimistic) state. The actual
2899 /// implementation is, similar to the underlying abstract state encoding, not
2900 /// exposed. In the most common case, the `updateImpl` will go through a list of
2901 /// reasons why its optimistic state is valid given the current information. If
2902 /// any combination of them holds and is sufficient to justify the current
/// optimistic state, the method shall return UNCHANGED. If not, the optimistic
2904 /// state is adjusted to the situation and the method shall return CHANGED.
2905 ///
2906 /// If the manifestation of the "concrete attribute" deduced by the subclass
2907 /// differs from the "default" behavior, which is a (set of) LLVM-IR
2908 /// attribute(s) for an argument, call site argument, function return value, or
2909 /// function, the `AbstractAttribute::manifest` method should be overloaded.
2910 ///
2911 /// NOTE: If the state obtained via getState() is INVALID, thus if
2912 ///       AbstractAttribute::getState().isValidState() returns false, no
2913 ///       information provided by the methods of this class should be used.
2914 /// NOTE: The Attributor currently has certain limitations to what we can do.
2915 ///       As a general rule of thumb, "concrete" abstract attributes should *for
2916 ///       now* only perform "backward" information propagation. That means
2917 ///       optimistic information obtained through abstract attributes should
2918 ///       only be used at positions that precede the origin of the information
2919 ///       with regards to the program flow. More practically, information can
2920 ///       *now* be propagated from instructions to their enclosing function, but
2921 ///       *not* from call sites to the called function. The mechanisms to allow
2922 ///       both directions will be added in the future.
2923 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
2924 ///       described in the file comment.
2925 struct AbstractAttribute : public IRPosition, public AADepGraphNode {
2926   using StateType = AbstractState;
2927 
2928   AbstractAttribute(const IRPosition &IRP) : IRPosition(IRP) {}
2929 
2930   /// Virtual destructor.
2931   virtual ~AbstractAttribute() = default;
2932 
  /// This function is used to identify if \p DGN is of type
2934   /// AbstractAttribute so that the dyn_cast and cast can use such information
2935   /// to cast an AADepGraphNode to an AbstractAttribute.
2936   ///
  /// We eagerly return true here because all AADepGraphNodes except for the
  /// synthesis node are of type AbstractAttribute.
2939   static bool classof(const AADepGraphNode *DGN) { return true; }
2940 
2941   /// Initialize the state with the information in the Attributor \p A.
2942   ///
2943   /// This function is called by the Attributor once all abstract attributes
  /// have been identified. It can and shall be used for tasks like:
2945   ///  - identify existing knowledge in the IR and use it for the "known state"
2946   ///  - perform any work that is not going to change over time, e.g., determine
2947   ///    a subset of the IR, or attributes in-flight, that have to be looked at
2948   ///    in the `updateImpl` method.
2949   virtual void initialize(Attributor &A) {}
2950 
2951   /// A query AA is always scheduled as long as we do updates because it does
2952   /// lazy computation that cannot be determined to be done from the outside.
2953   /// However, while query AAs will not be fixed if they do not have outstanding
  /// dependences, we will only schedule them like other AAs. If a query AA
  /// receives a new query, it needs to request an update via
  /// `Attributor::requestUpdateForAA`.
2957   virtual bool isQueryAA() const { return false; }
2958 
2959   /// Return the internal abstract state for inspection.
2960   virtual StateType &getState() = 0;
2961   virtual const StateType &getState() const = 0;
2962 
2963   /// Return an IR position, see struct IRPosition.
2964   const IRPosition &getIRPosition() const { return *this; };
2965   IRPosition &getIRPosition() { return *this; };
2966 
2967   /// Helper functions, for debug purposes only.
2968   ///{
2969   void print(raw_ostream &OS) const override;
2970   virtual void printWithDeps(raw_ostream &OS) const;
2971   void dump() const { print(dbgs()); }
2972 
2973   /// This function should return the "summarized" assumed state as string.
2974   virtual const std::string getAsStr() const = 0;
2975 
2976   /// This function should return the name of the AbstractAttribute
2977   virtual const std::string getName() const = 0;
2978 
2979   /// This function should return the address of the ID of the AbstractAttribute
2980   virtual const char *getIdAddr() const = 0;
2981   ///}
2982 
2983   /// Allow the Attributor access to the protected methods.
2984   friend struct Attributor;
2985 
2986 protected:
2987   /// Hook for the Attributor to trigger an update of the internal state.
2988   ///
2989   /// If this attribute is already fixed, this method will return UNCHANGED,
2990   /// otherwise it delegates to `AbstractAttribute::updateImpl`.
2991   ///
2992   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
2993   ChangeStatus update(Attributor &A);
2994 
2995   /// Hook for the Attributor to trigger the manifestation of the information
2996   /// represented by the abstract attribute in the LLVM-IR.
2997   ///
2998   /// \Return CHANGED if the IR was altered, otherwise UNCHANGED.
2999   virtual ChangeStatus manifest(Attributor &A) {
3000     return ChangeStatus::UNCHANGED;
3001   }
3002 
3003   /// Hook to enable custom statistic tracking, called after manifest that
3004   /// resulted in a change if statistics are enabled.
3005   ///
3006   /// We require subclasses to provide an implementation so we remember to
3007   /// add statistics for them.
3008   virtual void trackStatistics() const = 0;
3009 
3010   /// The actual update/transfer function which has to be implemented by the
3011   /// derived classes.
3012   ///
3013   /// If it is called, the environment has changed and we have to determine if
3014   /// the current information is still valid or adjust it otherwise.
3015   ///
3016   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
3017   virtual ChangeStatus updateImpl(Attributor &A) = 0;
3018 };
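
// A deliberately simplified sketch of a new "concrete" abstract attribute
// built from the pieces above (all names are illustrative and several required
// pieces, e.g. createForPosition, are omitted; see the file comment for the
// full mechanics):
//
//   struct AAMyProperty : public StateWrapper<BooleanState, AbstractAttribute> {
//     using Base = StateWrapper<BooleanState, AbstractAttribute>;
//     AAMyProperty(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
//
//     ChangeStatus updateImpl(Attributor &A) override {
//       // Inspect the IR and other AAs in-flight; if the optimistic state can
//       // no longer be justified, fall back to the pessimistic fixpoint.
//       return indicatePessimisticFixpoint();
//     }
//     void trackStatistics() const override {}
//     const std::string getAsStr() const override { return "my-property"; }
//     const std::string getName() const override { return "AAMyProperty"; }
//     const char *getIdAddr() const override { return &ID; }
//     static const char ID;
//   };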
3019 
3020 /// Forward declarations of output streams for debug purposes.
3021 ///
3022 ///{
3023 raw_ostream &operator<<(raw_ostream &OS, const AbstractAttribute &AA);
3024 raw_ostream &operator<<(raw_ostream &OS, ChangeStatus S);
3025 raw_ostream &operator<<(raw_ostream &OS, IRPosition::Kind);
3026 raw_ostream &operator<<(raw_ostream &OS, const IRPosition &);
3027 raw_ostream &operator<<(raw_ostream &OS, const AbstractState &State);
3028 template <typename base_ty, base_ty BestState, base_ty WorstState>
3029 raw_ostream &
3030 operator<<(raw_ostream &OS,
3031            const IntegerStateBase<base_ty, BestState, WorstState> &S) {
3032   return OS << "(" << S.getKnown() << "-" << S.getAssumed() << ")"
3033             << static_cast<const AbstractState &>(S);
3034 }
3035 raw_ostream &operator<<(raw_ostream &OS, const IntegerRangeState &State);
3036 ///}
3037 
3038 struct AttributorPass : public PassInfoMixin<AttributorPass> {
3039   PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM);
3040 };
3041 struct AttributorCGSCCPass : public PassInfoMixin<AttributorCGSCCPass> {
3042   PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
3043                         LazyCallGraph &CG, CGSCCUpdateResult &UR);
3044 };
3045 
3046 Pass *createAttributorLegacyPass();
3047 Pass *createAttributorCGSCCLegacyPass();
3048 
3049 /// Helper function to clamp a state \p S of type \p StateType with the
/// information in \p R and indicate/return if \p S did change (as in, an
/// update is required to be run again).
3052 template <typename StateType>
3053 ChangeStatus clampStateAndIndicateChange(StateType &S, const StateType &R) {
3054   auto Assumed = S.getAssumed();
3055   S ^= R;
3056   return Assumed == S.getAssumed() ? ChangeStatus::UNCHANGED
3057                                    : ChangeStatus::CHANGED;
3058 }
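
// Sketch of the typical use inside an `updateImpl` that simply delegates to
// another abstract attribute (the queried AA and the position `FnPos` are
// illustrative):
//
//   const auto &FnAA = A.getAAFor<AANoUnwind>(*this, FnPos, DepClassTy::REQUIRED);
//   return clampStateAndIndicateChange(getState(), FnAA.getState());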
3059 
3060 /// ----------------------------------------------------------------------------
3061 ///                       Abstract Attribute Classes
3062 /// ----------------------------------------------------------------------------
3063 
3064 /// An abstract attribute for the returned values of a function.
3065 struct AAReturnedValues
3066     : public IRAttribute<Attribute::Returned, AbstractAttribute> {
3067   AAReturnedValues(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3068 
3069   /// Return an assumed unique return value if a single candidate is found. If
3070   /// there cannot be one, return a nullptr. If it is not clear yet, return the
3071   /// Optional::NoneType.
3072   Optional<Value *> getAssumedUniqueReturnValue(Attributor &A) const;
3073 
3074   /// Check \p Pred on all returned values.
3075   ///
3076   /// This method will evaluate \p Pred on returned values and return
3077   /// true if (1) all returned values are known, and (2) \p Pred returned true
3078   /// for all returned values.
3079   ///
3080   /// Note: Unlike the Attributor::checkForAllReturnedValuesAndReturnInsts
3081   /// method, this one will not filter dead return instructions.
3082   virtual bool checkForAllReturnedValuesAndReturnInsts(
3083       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred)
3084       const = 0;
3085 
3086   using iterator =
3087       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::iterator;
3088   using const_iterator =
3089       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::const_iterator;
3090   virtual llvm::iterator_range<iterator> returned_values() = 0;
3091   virtual llvm::iterator_range<const_iterator> returned_values() const = 0;
3092 
3093   virtual size_t getNumReturnValues() const = 0;
3094 
3095   /// Create an abstract attribute view for the position \p IRP.
3096   static AAReturnedValues &createForPosition(const IRPosition &IRP,
3097                                              Attributor &A);
3098 
3099   /// See AbstractAttribute::getName()
3100   const std::string getName() const override { return "AAReturnedValues"; }
3101 
3102   /// See AbstractAttribute::getIdAddr()
3103   const char *getIdAddr() const override { return &ID; }
3104 
3105   /// This function should return true if the type of the \p AA is
3106   /// AAReturnedValues
3107   static bool classof(const AbstractAttribute *AA) {
3108     return (AA->getIdAddr() == &ID);
3109   }
3110 
3111   /// Unique ID (due to the unique address)
3112   static const char ID;
3113 };
3114 
3115 struct AANoUnwind
3116     : public IRAttribute<Attribute::NoUnwind,
3117                          StateWrapper<BooleanState, AbstractAttribute>> {
3118   AANoUnwind(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3119 
3120   /// Returns true if nounwind is assumed.
3121   bool isAssumedNoUnwind() const { return getAssumed(); }
3122 
3123   /// Returns true if nounwind is known.
3124   bool isKnownNoUnwind() const { return getKnown(); }
3125 
3126   /// Create an abstract attribute view for the position \p IRP.
3127   static AANoUnwind &createForPosition(const IRPosition &IRP, Attributor &A);
3128 
3129   /// See AbstractAttribute::getName()
3130   const std::string getName() const override { return "AANoUnwind"; }
3131 
3132   /// See AbstractAttribute::getIdAddr()
3133   const char *getIdAddr() const override { return &ID; }
3134 
3135   /// This function should return true if the type of the \p AA is AANoUnwind
3136   static bool classof(const AbstractAttribute *AA) {
3137     return (AA->getIdAddr() == &ID);
3138   }
3139 
3140   /// Unique ID (due to the unique address)
3141   static const char ID;
3142 };
3143 
3144 struct AANoSync
3145     : public IRAttribute<Attribute::NoSync,
3146                          StateWrapper<BooleanState, AbstractAttribute>> {
3147   AANoSync(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3148 
3149   /// Returns true if "nosync" is assumed.
3150   bool isAssumedNoSync() const { return getAssumed(); }
3151 
3152   /// Returns true if "nosync" is known.
3153   bool isKnownNoSync() const { return getKnown(); }
3154 
3155   /// Helper function used to determine whether an instruction is non-relaxed
  /// atomic, i.e., an atomic instruction that does not have unordered or
  /// monotonic ordering.
3158   static bool isNonRelaxedAtomic(const Instruction *I);
3159 
3160   /// Helper function specific for intrinsics which are potentially volatile.
3161   static bool isNoSyncIntrinsic(const Instruction *I);
3162 
3163   /// Create an abstract attribute view for the position \p IRP.
3164   static AANoSync &createForPosition(const IRPosition &IRP, Attributor &A);
3165 
3166   /// See AbstractAttribute::getName()
3167   const std::string getName() const override { return "AANoSync"; }
3168 
3169   /// See AbstractAttribute::getIdAddr()
3170   const char *getIdAddr() const override { return &ID; }
3171 
3172   /// This function should return true if the type of the \p AA is AANoSync
3173   static bool classof(const AbstractAttribute *AA) {
3174     return (AA->getIdAddr() == &ID);
3175   }
3176 
3177   /// Unique ID (due to the unique address)
3178   static const char ID;
3179 };
3180 
3181 /// An abstract interface for all nonnull attributes.
3182 struct AANonNull
3183     : public IRAttribute<Attribute::NonNull,
3184                          StateWrapper<BooleanState, AbstractAttribute>> {
3185   AANonNull(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3186 
3187   /// Return true if we assume that the underlying value is nonnull.
3188   bool isAssumedNonNull() const { return getAssumed(); }
3189 
  /// Return true if we know that the underlying value is nonnull.
3191   bool isKnownNonNull() const { return getKnown(); }
3192 
3193   /// Create an abstract attribute view for the position \p IRP.
3194   static AANonNull &createForPosition(const IRPosition &IRP, Attributor &A);
3195 
3196   /// See AbstractAttribute::getName()
3197   const std::string getName() const override { return "AANonNull"; }
3198 
3199   /// See AbstractAttribute::getIdAddr()
3200   const char *getIdAddr() const override { return &ID; }
3201 
3202   /// This function should return true if the type of the \p AA is AANonNull
3203   static bool classof(const AbstractAttribute *AA) {
3204     return (AA->getIdAddr() == &ID);
3205   }
3206 
3207   /// Unique ID (due to the unique address)
3208   static const char ID;
3209 };
3210 
3211 /// An abstract attribute for norecurse.
3212 struct AANoRecurse
3213     : public IRAttribute<Attribute::NoRecurse,
3214                          StateWrapper<BooleanState, AbstractAttribute>> {
3215   AANoRecurse(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3216 
3217   /// Return true if "norecurse" is assumed.
3218   bool isAssumedNoRecurse() const { return getAssumed(); }
3219 
3220   /// Return true if "norecurse" is known.
3221   bool isKnownNoRecurse() const { return getKnown(); }
3222 
3223   /// Create an abstract attribute view for the position \p IRP.
3224   static AANoRecurse &createForPosition(const IRPosition &IRP, Attributor &A);
3225 
3226   /// See AbstractAttribute::getName()
3227   const std::string getName() const override { return "AANoRecurse"; }
3228 
3229   /// See AbstractAttribute::getIdAddr()
3230   const char *getIdAddr() const override { return &ID; }
3231 
3232   /// This function should return true if the type of the \p AA is AANoRecurse
3233   static bool classof(const AbstractAttribute *AA) {
3234     return (AA->getIdAddr() == &ID);
3235   }
3236 
3237   /// Unique ID (due to the unique address)
3238   static const char ID;
3239 };
3240 
3241 /// An abstract attribute for willreturn.
3242 struct AAWillReturn
3243     : public IRAttribute<Attribute::WillReturn,
3244                          StateWrapper<BooleanState, AbstractAttribute>> {
3245   AAWillReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3246 
3247   /// Return true if "willreturn" is assumed.
3248   bool isAssumedWillReturn() const { return getAssumed(); }
3249 
3250   /// Return true if "willreturn" is known.
3251   bool isKnownWillReturn() const { return getKnown(); }
3252 
3253   /// Create an abstract attribute view for the position \p IRP.
3254   static AAWillReturn &createForPosition(const IRPosition &IRP, Attributor &A);
3255 
3256   /// See AbstractAttribute::getName()
3257   const std::string getName() const override { return "AAWillReturn"; }
3258 
3259   /// See AbstractAttribute::getIdAddr()
3260   const char *getIdAddr() const override { return &ID; }
3261 
3262   /// This function should return true if the type of the \p AA is AAWillReturn
3263   static bool classof(const AbstractAttribute *AA) {
3264     return (AA->getIdAddr() == &ID);
3265   }
3266 
3267   /// Unique ID (due to the unique address)
3268   static const char ID;
3269 };
3270 
3271 /// An abstract attribute for undefined behavior.
3272 struct AAUndefinedBehavior
3273     : public StateWrapper<BooleanState, AbstractAttribute> {
3274   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3275   AAUndefinedBehavior(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3276 
3277   /// Return true if "undefined behavior" is assumed.
3278   bool isAssumedToCauseUB() const { return getAssumed(); }
3279 
3280   /// Return true if "undefined behavior" is assumed for a specific instruction.
3281   virtual bool isAssumedToCauseUB(Instruction *I) const = 0;
3282 
3283   /// Return true if "undefined behavior" is known.
3284   bool isKnownToCauseUB() const { return getKnown(); }
3285 
3286   /// Return true if "undefined behavior" is known for a specific instruction.
3287   virtual bool isKnownToCauseUB(Instruction *I) const = 0;
3288 
3289   /// Create an abstract attribute view for the position \p IRP.
3290   static AAUndefinedBehavior &createForPosition(const IRPosition &IRP,
3291                                                 Attributor &A);
3292 
3293   /// See AbstractAttribute::getName()
3294   const std::string getName() const override { return "AAUndefinedBehavior"; }
3295 
3296   /// See AbstractAttribute::getIdAddr()
3297   const char *getIdAddr() const override { return &ID; }
3298 
3299   /// This function should return true if the type of the \p AA is
  /// AAUndefinedBehavior
3301   static bool classof(const AbstractAttribute *AA) {
3302     return (AA->getIdAddr() == &ID);
3303   }
3304 
3305   /// Unique ID (due to the unique address)
3306   static const char ID;
3307 };
3308 
3309 /// An abstract interface to determine reachability of point A to B.
3310 struct AAReachability : public StateWrapper<BooleanState, AbstractAttribute> {
3311   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3312   AAReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3313 
  /// Returns true if the 'From' instruction is assumed to reach the 'To'
  /// instruction.
3315   /// Users should provide two positions they are interested in, and the class
3316   /// determines (and caches) reachability.
3317   bool isAssumedReachable(Attributor &A, const Instruction &From,
3318                           const Instruction &To) const {
3319     if (!getState().isValidState())
3320       return true;
3321     return A.getInfoCache().getPotentiallyReachable(From, To);
3322   }
3323 
  /// Returns true if the 'From' instruction is known to reach the 'To'
  /// instruction.
3325   /// Users should provide two positions they are interested in, and the class
3326   /// determines (and caches) reachability.
3327   bool isKnownReachable(Attributor &A, const Instruction &From,
3328                         const Instruction &To) const {
3329     if (!getState().isValidState())
3330       return false;
3331     return A.getInfoCache().getPotentiallyReachable(From, To);
3332   }
3333 
3334   /// Create an abstract attribute view for the position \p IRP.
3335   static AAReachability &createForPosition(const IRPosition &IRP,
3336                                            Attributor &A);
3337 
3338   /// See AbstractAttribute::getName()
3339   const std::string getName() const override { return "AAReachability"; }
3340 
3341   /// See AbstractAttribute::getIdAddr()
3342   const char *getIdAddr() const override { return &ID; }
3343 
3344   /// This function should return true if the type of the \p AA is
3345   /// AAReachability
3346   static bool classof(const AbstractAttribute *AA) {
3347     return (AA->getIdAddr() == &ID);
3348   }
3349 
3350   /// Unique ID (due to the unique address)
3351   static const char ID;
3352 };
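
// Sketch of how another AA would query reachability (the function and
// instruction names are illustrative):
//
//   const auto &ReachAA = A.getAAFor<AAReachability>(
//       *this, IRPosition::function(Fn), DepClassTy::OPTIONAL);
//   if (!ReachAA.isAssumedReachable(A, FromInst, ToInst)) {
//     // FromInst does not reach ToInst, so the transformation is safe.
//   }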
3353 
3354 /// An abstract interface for all noalias attributes.
3355 struct AANoAlias
3356     : public IRAttribute<Attribute::NoAlias,
3357                          StateWrapper<BooleanState, AbstractAttribute>> {
3358   AANoAlias(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3359 
  /// Return true if we assume that the underlying value is noalias.
3361   bool isAssumedNoAlias() const { return getAssumed(); }
3362 
  /// Return true if we know that the underlying value is noalias.
3364   bool isKnownNoAlias() const { return getKnown(); }
3365 
3366   /// Create an abstract attribute view for the position \p IRP.
3367   static AANoAlias &createForPosition(const IRPosition &IRP, Attributor &A);
3368 
3369   /// See AbstractAttribute::getName()
3370   const std::string getName() const override { return "AANoAlias"; }
3371 
3372   /// See AbstractAttribute::getIdAddr()
3373   const char *getIdAddr() const override { return &ID; }
3374 
3375   /// This function should return true if the type of the \p AA is AANoAlias
3376   static bool classof(const AbstractAttribute *AA) {
3377     return (AA->getIdAddr() == &ID);
3378   }
3379 
3380   /// Unique ID (due to the unique address)
3381   static const char ID;
3382 };
3383 
3384 /// An AbstractAttribute for nofree.
3385 struct AANoFree
3386     : public IRAttribute<Attribute::NoFree,
3387                          StateWrapper<BooleanState, AbstractAttribute>> {
3388   AANoFree(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3389 
3390   /// Return true if "nofree" is assumed.
3391   bool isAssumedNoFree() const { return getAssumed(); }
3392 
3393   /// Return true if "nofree" is known.
3394   bool isKnownNoFree() const { return getKnown(); }
3395 
3396   /// Create an abstract attribute view for the position \p IRP.
3397   static AANoFree &createForPosition(const IRPosition &IRP, Attributor &A);
3398 
3399   /// See AbstractAttribute::getName()
3400   const std::string getName() const override { return "AANoFree"; }
3401 
3402   /// See AbstractAttribute::getIdAddr()
3403   const char *getIdAddr() const override { return &ID; }
3404 
3405   /// This function should return true if the type of the \p AA is AANoFree
3406   static bool classof(const AbstractAttribute *AA) {
3407     return (AA->getIdAddr() == &ID);
3408   }
3409 
3410   /// Unique ID (due to the unique address)
3411   static const char ID;
3412 };
3413 
3414 /// An AbstractAttribute for noreturn.
3415 struct AANoReturn
3416     : public IRAttribute<Attribute::NoReturn,
3417                          StateWrapper<BooleanState, AbstractAttribute>> {
3418   AANoReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3419 
3420   /// Return true if the underlying object is assumed to never return.
3421   bool isAssumedNoReturn() const { return getAssumed(); }
3422 
3423   /// Return true if the underlying object is known to never return.
3424   bool isKnownNoReturn() const { return getKnown(); }
3425 
3426   /// Create an abstract attribute view for the position \p IRP.
3427   static AANoReturn &createForPosition(const IRPosition &IRP, Attributor &A);
3428 
3429   /// See AbstractAttribute::getName()
3430   const std::string getName() const override { return "AANoReturn"; }
3431 
3432   /// See AbstractAttribute::getIdAddr()
3433   const char *getIdAddr() const override { return &ID; }
3434 
3435   /// This function should return true if the type of the \p AA is AANoReturn
3436   static bool classof(const AbstractAttribute *AA) {
3437     return (AA->getIdAddr() == &ID);
3438   }
3439 
3440   /// Unique ID (due to the unique address)
3441   static const char ID;
3442 };
3443 
3444 /// An abstract interface for liveness abstract attribute.
3445 struct AAIsDead
3446     : public StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute> {
3447   using Base = StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute>;
3448   AAIsDead(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3449 
3450   /// State encoding bits. A set bit in the state means the property holds.
3451   enum {
3452     HAS_NO_EFFECT = 1 << 0,
3453     IS_REMOVABLE = 1 << 1,
3454 
3455     IS_DEAD = HAS_NO_EFFECT | IS_REMOVABLE,
3456   };
3457   static_assert(IS_DEAD == getBestState(), "Unexpected BEST_STATE value");
3458 
3459 protected:
3460   /// The query functions are protected such that other attributes need to go
3461   /// through the Attributor interfaces: `Attributor::isAssumedDead(...)`
3462 
3463   /// Returns true if the underlying value is assumed dead.
3464   virtual bool isAssumedDead() const = 0;
3465 
3466   /// Returns true if the underlying value is known dead.
3467   virtual bool isKnownDead() const = 0;
3468 
3469   /// Returns true if \p BB is assumed dead.
3470   virtual bool isAssumedDead(const BasicBlock *BB) const = 0;
3471 
3472   /// Returns true if \p BB is known dead.
3473   virtual bool isKnownDead(const BasicBlock *BB) const = 0;
3474 
3475   /// Returns true if \p I is assumed dead.
3476   virtual bool isAssumedDead(const Instruction *I) const = 0;
3477 
3478   /// Returns true if \p I is known dead.
3479   virtual bool isKnownDead(const Instruction *I) const = 0;
3480 
3481   /// Return true if the underlying value is a store that is known to be
3482   /// removable. This is different from dead stores as the removable store
3483   /// can have an effect on live values, especially loads, but that effect
3484   /// is propagated which allows us to remove the store in turn.
3485   virtual bool isRemovableStore() const { return false; }
3486 
3487   /// This method is used to check if at least one instruction in a collection
3488   /// of instructions is live.
3489   template <typename T> bool isLiveInstSet(T begin, T end) const {
3490     for (const auto &I : llvm::make_range(begin, end)) {
3491       assert(I->getFunction() == getIRPosition().getAssociatedFunction() &&
3492              "Instruction must be in the same anchor scope function.");
3493 
3494       if (!isAssumedDead(I))
3495         return true;
3496     }
3497 
3498     return false;
3499   }
3500 
3501 public:
3502   /// Create an abstract attribute view for the position \p IRP.
3503   static AAIsDead &createForPosition(const IRPosition &IRP, Attributor &A);
3504 
3505   /// Determine if \p F might catch asynchronous exceptions.
3506   static bool mayCatchAsynchronousExceptions(const Function &F) {
3507     return F.hasPersonalityFn() && !canSimplifyInvokeNoUnwind(&F);
3508   }
3509 
  /// Return true if the edge from \p From BB to \p To BB is assumed dead.
3511   /// This is specifically useful in AAReachability.
3512   virtual bool isEdgeDead(const BasicBlock *From, const BasicBlock *To) const {
3513     return false;
3514   }
3515 
3516   /// See AbstractAttribute::getName()
3517   const std::string getName() const override { return "AAIsDead"; }
3518 
3519   /// See AbstractAttribute::getIdAddr()
3520   const char *getIdAddr() const override { return &ID; }
3521 
3522   /// This function should return true if the type of the \p AA is AAIsDead
3523   static bool classof(const AbstractAttribute *AA) {
3524     return (AA->getIdAddr() == &ID);
3525   }
3526 
3527   /// Unique ID (due to the unique address)
3528   static const char ID;
3529 
3530   friend struct Attributor;
3531 };
3532 
3533 /// State for dereferenceable attribute
3534 struct DerefState : AbstractState {
3535 
3536   static DerefState getBestState() { return DerefState(); }
3537   static DerefState getBestState(const DerefState &) { return getBestState(); }
3538 
3539   /// Return the worst possible representable state.
3540   static DerefState getWorstState() {
3541     DerefState DS;
3542     DS.indicatePessimisticFixpoint();
3543     return DS;
3544   }
3545   static DerefState getWorstState(const DerefState &) {
3546     return getWorstState();
3547   }
3548 
  /// State representing the dereferenceable bytes.
3550   IncIntegerState<> DerefBytesState;
3551 
  /// Map representing the accessed memory offsets and sizes.
  /// A key is an offset and a value is a size.
3554   /// If there is a load/store instruction something like,
3555   ///   p[offset] = v;
3556   /// (offset, sizeof(v)) will be inserted to this map.
3557   /// std::map is used because we want to iterate keys in ascending order.
3558   std::map<int64_t, uint64_t> AccessedBytesMap;
3559 
  /// Helper function to calculate dereferenceable bytes from current known
  /// bytes and accessed bytes.
  ///
  /// ```
  /// int f(int *A) {
  ///   *A = 0;
  ///   *(A+2) = 2;
  ///   *(A+1) = 1;
  ///   *(A+10) = 10;
  /// }
  /// ```
  /// In that case, AccessedBytesMap is `{0:4, 4:4, 8:4, 40:4}`.
  /// AccessedBytesMap is a std::map so it is iterated in ascending order of
  /// the key (Offset). So KnownBytes will be updated like this:
  ///
  /// | Access  | KnownBytes |
  /// | (0, 4)  | 0 -> 4     |
  /// | (4, 4)  | 4 -> 8     |
  /// | (8, 4)  | 8 -> 12    |
  /// | (40, 4) | 12 (break) |
3579   void computeKnownDerefBytesFromAccessedMap() {
3580     int64_t KnownBytes = DerefBytesState.getKnown();
3581     for (auto &Access : AccessedBytesMap) {
3582       if (KnownBytes < Access.first)
3583         break;
3584       KnownBytes = std::max(KnownBytes, Access.first + (int64_t)Access.second);
3585     }
3586 
3587     DerefBytesState.takeKnownMaximum(KnownBytes);
3588   }
3589 
  /// State representing whether the value is globally dereferenceable.
3591   BooleanState GlobalState;
3592 
3593   /// See AbstractState::isValidState()
3594   bool isValidState() const override { return DerefBytesState.isValidState(); }
3595 
3596   /// See AbstractState::isAtFixpoint()
3597   bool isAtFixpoint() const override {
3598     return !isValidState() ||
3599            (DerefBytesState.isAtFixpoint() && GlobalState.isAtFixpoint());
3600   }
3601 
3602   /// See AbstractState::indicateOptimisticFixpoint(...)
3603   ChangeStatus indicateOptimisticFixpoint() override {
3604     DerefBytesState.indicateOptimisticFixpoint();
3605     GlobalState.indicateOptimisticFixpoint();
3606     return ChangeStatus::UNCHANGED;
3607   }
3608 
3609   /// See AbstractState::indicatePessimisticFixpoint(...)
3610   ChangeStatus indicatePessimisticFixpoint() override {
3611     DerefBytesState.indicatePessimisticFixpoint();
3612     GlobalState.indicatePessimisticFixpoint();
3613     return ChangeStatus::CHANGED;
3614   }
3615 
3616   /// Update known dereferenceable bytes.
3617   void takeKnownDerefBytesMaximum(uint64_t Bytes) {
3618     DerefBytesState.takeKnownMaximum(Bytes);
3619 
3620     // Known bytes might increase.
3621     computeKnownDerefBytesFromAccessedMap();
3622   }
3623 
3624   /// Update assumed dereferenceable bytes.
3625   void takeAssumedDerefBytesMinimum(uint64_t Bytes) {
3626     DerefBytesState.takeAssumedMinimum(Bytes);
3627   }
3628 
3629   /// Add accessed bytes to the map.
3630   void addAccessedBytes(int64_t Offset, uint64_t Size) {
3631     uint64_t &AccessedBytes = AccessedBytesMap[Offset];
3632     AccessedBytes = std::max(AccessedBytes, Size);
3633 
3634     // Known bytes might increase.
3635     computeKnownDerefBytesFromAccessedMap();
3636   }
3637 
3638   /// Equality for DerefState.
3639   bool operator==(const DerefState &R) const {
3640     return this->DerefBytesState == R.DerefBytesState &&
3641            this->GlobalState == R.GlobalState;
3642   }
3643 
3644   /// Inequality for DerefState.
3645   bool operator!=(const DerefState &R) const { return !(*this == R); }
3646 
3647   /// See IntegerStateBase::operator^=
3648   DerefState operator^=(const DerefState &R) {
3649     DerefBytesState ^= R.DerefBytesState;
3650     GlobalState ^= R.GlobalState;
3651     return *this;
3652   }
3653 
3654   /// See IntegerStateBase::operator+=
3655   DerefState operator+=(const DerefState &R) {
3656     DerefBytesState += R.DerefBytesState;
3657     GlobalState += R.GlobalState;
3658     return *this;
3659   }
3660 
3661   /// See IntegerStateBase::operator&=
3662   DerefState operator&=(const DerefState &R) {
3663     DerefBytesState &= R.DerefBytesState;
3664     GlobalState &= R.GlobalState;
3665     return *this;
3666   }
3667 
3668   /// See IntegerStateBase::operator|=
3669   DerefState operator|=(const DerefState &R) {
3670     DerefBytesState |= R.DerefBytesState;
3671     GlobalState |= R.GlobalState;
3672     return *this;
3673   }
3674 
3675 protected:
3676   const AANonNull *NonNullAA = nullptr;
3677 };
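
// Usage sketch (illustrative offsets and sizes): contiguous accesses recorded
// in the map raise the known dereferenceable bytes automatically.
//
//   DerefState DS;
//   DS.addAccessedBytes(/* Offset */ 0, /* Size */ 4);   // Known bytes -> 4.
//   DS.addAccessedBytes(/* Offset */ 4, /* Size */ 4);   // Known bytes -> 8.
//   DS.addAccessedBytes(/* Offset */ 40, /* Size */ 4);  // Still 8, gap at 8..40.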
3678 
3679 /// An abstract interface for all dereferenceable attribute.
3680 struct AADereferenceable
3681     : public IRAttribute<Attribute::Dereferenceable,
3682                          StateWrapper<DerefState, AbstractAttribute>> {
3683   AADereferenceable(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3684 
3685   /// Return true if we assume that the underlying value is nonnull.
3686   bool isAssumedNonNull() const {
3687     return NonNullAA && NonNullAA->isAssumedNonNull();
3688   }
3689 
3690   /// Return true if we know that the underlying value is nonnull.
3691   bool isKnownNonNull() const {
3692     return NonNullAA && NonNullAA->isKnownNonNull();
3693   }
3694 
  /// Return true if we assume that the underlying value is
3696   /// dereferenceable(_or_null) globally.
3697   bool isAssumedGlobal() const { return GlobalState.getAssumed(); }
3698 
  /// Return true if we know that the underlying value is
3700   /// dereferenceable(_or_null) globally.
3701   bool isKnownGlobal() const { return GlobalState.getKnown(); }
3702 
3703   /// Return assumed dereferenceable bytes.
3704   uint32_t getAssumedDereferenceableBytes() const {
3705     return DerefBytesState.getAssumed();
3706   }
3707 
3708   /// Return known dereferenceable bytes.
3709   uint32_t getKnownDereferenceableBytes() const {
3710     return DerefBytesState.getKnown();
3711   }
3712 
3713   /// Create an abstract attribute view for the position \p IRP.
3714   static AADereferenceable &createForPosition(const IRPosition &IRP,
3715                                               Attributor &A);
3716 
3717   /// See AbstractAttribute::getName()
3718   const std::string getName() const override { return "AADereferenceable"; }
3719 
3720   /// See AbstractAttribute::getIdAddr()
3721   const char *getIdAddr() const override { return &ID; }
3722 
3723   /// This function should return true if the type of the \p AA is
3724   /// AADereferenceable
3725   static bool classof(const AbstractAttribute *AA) {
3726     return (AA->getIdAddr() == &ID);
3727   }
3728 
3729   /// Unique ID (due to the unique address)
3730   static const char ID;
3731 };
3732 
3733 using AAAlignmentStateType =
3734     IncIntegerState<uint64_t, Value::MaximumAlignment, 1>;
3735 /// An abstract interface for all align attributes.
3736 struct AAAlign : public IRAttribute<
3737                      Attribute::Alignment,
3738                      StateWrapper<AAAlignmentStateType, AbstractAttribute>> {
3739   AAAlign(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3740 
3741   /// Return assumed alignment.
3742   Align getAssumedAlign() const { return Align(getAssumed()); }
3743 
3744   /// Return known alignment.
3745   Align getKnownAlign() const { return Align(getKnown()); }
3746 
3747   /// See AbstractAttribute::getName()
3748   const std::string getName() const override { return "AAAlign"; }
3749 
3750   /// See AbstractAttribute::getIdAddr()
3751   const char *getIdAddr() const override { return &ID; }
3752 
3753   /// This function should return true if the type of the \p AA is AAAlign
3754   static bool classof(const AbstractAttribute *AA) {
3755     return (AA->getIdAddr() == &ID);
3756   }
3757 
3758   /// Create an abstract attribute view for the position \p IRP.
3759   static AAAlign &createForPosition(const IRPosition &IRP, Attributor &A);
3760 
3761   /// Unique ID (due to the unique address)
3762   static const char ID;
3763 };
3764 
/// An abstract interface to track if a value leaves its defining function
3766 /// instance.
3767 /// TODO: We should make it a ternary AA tracking uniqueness, and uniqueness
3768 /// wrt. the Attributor analysis separately.
3769 struct AAInstanceInfo : public StateWrapper<BooleanState, AbstractAttribute> {
3770   AAInstanceInfo(const IRPosition &IRP, Attributor &A)
3771       : StateWrapper<BooleanState, AbstractAttribute>(IRP) {}
3772 
3773   /// Return true if we know that the underlying value is unique in its scope
3774   /// wrt. the Attributor analysis. That means it might not be unique but we can
  /// still use pointer equality without the risk of representing two instances
  /// with one `llvm::Value`.
3777   bool isKnownUniqueForAnalysis() const { return isKnown(); }
3778 
3779   /// Return true if we assume that the underlying value is unique in its scope
3780   /// wrt. the Attributor analysis. That means it might not be unique but we can
  /// still use pointer equality without the risk of representing two instances
  /// with one `llvm::Value`.
3783   bool isAssumedUniqueForAnalysis() const { return isAssumed(); }
3784 
3785   /// Create an abstract attribute view for the position \p IRP.
3786   static AAInstanceInfo &createForPosition(const IRPosition &IRP,
3787                                            Attributor &A);
3788 
3789   /// See AbstractAttribute::getName()
3790   const std::string getName() const override { return "AAInstanceInfo"; }
3791 
3792   /// See AbstractAttribute::getIdAddr()
3793   const char *getIdAddr() const override { return &ID; }
3794 
3795   /// This function should return true if the type of the \p AA is
3796   /// AAInstanceInfo
3797   static bool classof(const AbstractAttribute *AA) {
3798     return (AA->getIdAddr() == &ID);
3799   }
3800 
3801   /// Unique ID (due to the unique address)
3802   static const char ID;
3803 };
3804 
3805 /// An abstract interface for all nocapture attributes.
3806 struct AANoCapture
3807     : public IRAttribute<
3808           Attribute::NoCapture,
3809           StateWrapper<BitIntegerState<uint16_t, 7, 0>, AbstractAttribute>> {
3810   AANoCapture(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3811 
3812   /// State encoding bits. A set bit in the state means the property holds.
3813   /// NO_CAPTURE is the best possible state, 0 the worst possible state.
3814   enum {
3815     NOT_CAPTURED_IN_MEM = 1 << 0,
3816     NOT_CAPTURED_IN_INT = 1 << 1,
3817     NOT_CAPTURED_IN_RET = 1 << 2,
3818 
    /// If we do not capture the value in memory or through integers, we can
    /// only communicate it back as a derived pointer.
3821     NO_CAPTURE_MAYBE_RETURNED = NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT,
3822 
    /// If we do not capture the value in memory, through integers, or as a
    /// derived pointer, we know it is not captured.
3825     NO_CAPTURE =
3826         NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT | NOT_CAPTURED_IN_RET,
3827   };
3828 
3829   /// Return true if we know that the underlying value is not captured in its
3830   /// respective scope.
3831   bool isKnownNoCapture() const { return isKnown(NO_CAPTURE); }
3832 
3833   /// Return true if we assume that the underlying value is not captured in its
3834   /// respective scope.
3835   bool isAssumedNoCapture() const { return isAssumed(NO_CAPTURE); }
3836 
3837   /// Return true if we know that the underlying value is not captured in its
3838   /// respective scope but we allow it to escape through a "return".
3839   bool isKnownNoCaptureMaybeReturned() const {
3840     return isKnown(NO_CAPTURE_MAYBE_RETURNED);
3841   }
3842 
3843   /// Return true if we assume that the underlying value is not captured in its
3844   /// respective scope but we allow it to escape through a "return".
3845   bool isAssumedNoCaptureMaybeReturned() const {
3846     return isAssumed(NO_CAPTURE_MAYBE_RETURNED);
3847   }
3848 
3849   /// Create an abstract attribute view for the position \p IRP.
3850   static AANoCapture &createForPosition(const IRPosition &IRP, Attributor &A);
3851 
3852   /// See AbstractAttribute::getName()
3853   const std::string getName() const override { return "AANoCapture"; }
3854 
3855   /// See AbstractAttribute::getIdAddr()
3856   const char *getIdAddr() const override { return &ID; }
3857 
3858   /// This function should return true if the type of the \p AA is AANoCapture
3859   static bool classof(const AbstractAttribute *AA) {
3860     return (AA->getIdAddr() == &ID);
3861   }
3862 
3863   /// Unique ID (due to the unique address)
3864   static const char ID;
3865 };
3866 
3867 struct ValueSimplifyStateType : public AbstractState {
3868 
3869   ValueSimplifyStateType(Type *Ty) : Ty(Ty) {}
3870 
3871   static ValueSimplifyStateType getBestState(Type *Ty) {
3872     return ValueSimplifyStateType(Ty);
3873   }
3874   static ValueSimplifyStateType getBestState(const ValueSimplifyStateType &VS) {
3875     return getBestState(VS.Ty);
3876   }
3877 
3878   /// Return the worst possible representable state.
3879   static ValueSimplifyStateType getWorstState(Type *Ty) {
3880     ValueSimplifyStateType DS(Ty);
3881     DS.indicatePessimisticFixpoint();
3882     return DS;
3883   }
3884   static ValueSimplifyStateType
3885   getWorstState(const ValueSimplifyStateType &VS) {
3886     return getWorstState(VS.Ty);
3887   }
3888 
3889   /// See AbstractState::isValidState(...)
3890   bool isValidState() const override { return BS.isValidState(); }
3891 
3892   /// See AbstractState::isAtFixpoint(...)
3893   bool isAtFixpoint() const override { return BS.isAtFixpoint(); }
3894 
3895   /// Return the assumed state encoding.
3896   ValueSimplifyStateType getAssumed() { return *this; }
3897   const ValueSimplifyStateType &getAssumed() const { return *this; }
3898 
3899   /// See AbstractState::indicatePessimisticFixpoint(...)
3900   ChangeStatus indicatePessimisticFixpoint() override {
3901     return BS.indicatePessimisticFixpoint();
3902   }
3903 
3904   /// See AbstractState::indicateOptimisticFixpoint(...)
3905   ChangeStatus indicateOptimisticFixpoint() override {
3906     return BS.indicateOptimisticFixpoint();
3907   }
3908 
3909   /// "Clamp" this state with \p PVS.
3910   ValueSimplifyStateType operator^=(const ValueSimplifyStateType &VS) {
3911     BS ^= VS.BS;
3912     unionAssumed(VS.SimplifiedAssociatedValue);
3913     return *this;
3914   }
3915 
3916   bool operator==(const ValueSimplifyStateType &RHS) const {
3917     if (isValidState() != RHS.isValidState())
3918       return false;
3919     if (!isValidState() && !RHS.isValidState())
3920       return true;
3921     return SimplifiedAssociatedValue == RHS.SimplifiedAssociatedValue;
3922   }
3923 
3924 protected:
3925   /// The type of the original value.
3926   Type *Ty;
3927 
3928   /// Merge \p Other into the currently assumed simplified value
3929   bool unionAssumed(Optional<Value *> Other);
3930 
3931   /// Helper to track validity and fixpoint
3932   BooleanState BS;
3933 
  /// An assumed simplified value. Initially, it is set to Optional::None,
  /// which means that the value is not clear under the current assumption. In
  /// the pessimistic state, getAssumedSimplifiedValue does not return this
  /// value but the original associated value.
3938   Optional<Value *> SimplifiedAssociatedValue;
3939 };
3940 
3941 /// An abstract interface for value simplify abstract attribute.
3942 struct AAValueSimplify
3943     : public StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *> {
3944   using Base = StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *>;
3945   AAValueSimplify(const IRPosition &IRP, Attributor &A)
3946       : Base(IRP, IRP.getAssociatedType()) {}
3947 
3948   /// Create an abstract attribute view for the position \p IRP.
3949   static AAValueSimplify &createForPosition(const IRPosition &IRP,
3950                                             Attributor &A);
3951 
3952   /// See AbstractAttribute::getName()
3953   const std::string getName() const override { return "AAValueSimplify"; }
3954 
3955   /// See AbstractAttribute::getIdAddr()
3956   const char *getIdAddr() const override { return &ID; }
3957 
3958   /// This function should return true if the type of the \p AA is
3959   /// AAValueSimplify
3960   static bool classof(const AbstractAttribute *AA) {
3961     return (AA->getIdAddr() == &ID);
3962   }
3963 
3964   /// Unique ID (due to the unique address)
3965   static const char ID;
3966 
3967 private:
  /// Return an assumed simplified value if a single candidate is found. If
  /// there cannot be one, return the original value. If it is not clear yet,
  /// return llvm::None.
3971   ///
3972   /// Use `Attributor::getAssumedSimplified` for value simplification.
3973   virtual Optional<Value *> getAssumedSimplifiedValue(Attributor &A) const = 0;
3974 
3975   friend struct Attributor;
3976 };
3977 
3978 struct AAHeapToStack : public StateWrapper<BooleanState, AbstractAttribute> {
3979   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3980   AAHeapToStack(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3981 
3982   /// Returns true if HeapToStack conversion is assumed to be possible.
3983   virtual bool isAssumedHeapToStack(const CallBase &CB) const = 0;
3984 
3985   /// Returns true if HeapToStack conversion is assumed and the CB is a
3986   /// callsite to a free operation to be removed.
3987   virtual bool isAssumedHeapToStackRemovedFree(CallBase &CB) const = 0;
3988 
3989   /// Create an abstract attribute view for the position \p IRP.
3990   static AAHeapToStack &createForPosition(const IRPosition &IRP, Attributor &A);
3991 
3992   /// See AbstractAttribute::getName()
3993   const std::string getName() const override { return "AAHeapToStack"; }
3994 
3995   /// See AbstractAttribute::getIdAddr()
3996   const char *getIdAddr() const override { return &ID; }
3997 
3998   /// This function should return true if the type of the \p AA is AAHeapToStack
3999   static bool classof(const AbstractAttribute *AA) {
4000     return (AA->getIdAddr() == &ID);
4001   }
4002 
4003   /// Unique ID (due to the unique address)
4004   static const char ID;
4005 };
4006 
4007 /// An abstract interface for privatizability.
4008 ///
4009 /// A pointer is privatizable if it can be replaced by a new, private one.
/// Privatizing a pointer reduces the use count and the interaction between
/// unrelated code parts.
4012 ///
/// In order for a pointer to be privatizable, its value cannot be observed
4014 /// (=nocapture), it is (for now) not written (=readonly & noalias), we know
4015 /// what values are necessary to make the private copy look like the original
4016 /// one, and the values we need can be loaded (=dereferenceable).
4017 struct AAPrivatizablePtr
4018     : public StateWrapper<BooleanState, AbstractAttribute> {
4019   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4020   AAPrivatizablePtr(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4021 
4022   /// Returns true if pointer privatization is assumed to be possible.
4023   bool isAssumedPrivatizablePtr() const { return getAssumed(); }
4024 
4025   /// Returns true if pointer privatization is known to be possible.
4026   bool isKnownPrivatizablePtr() const { return getKnown(); }
4027 
4028   /// Return the type we can choose for a private copy of the underlying
4029   /// value. None means it is not clear yet, nullptr means there is none.
4030   virtual Optional<Type *> getPrivatizableType() const = 0;
4031 
4032   /// Create an abstract attribute view for the position \p IRP.
4033   static AAPrivatizablePtr &createForPosition(const IRPosition &IRP,
4034                                               Attributor &A);
4035 
4036   /// See AbstractAttribute::getName()
4037   const std::string getName() const override { return "AAPrivatizablePtr"; }
4038 
4039   /// See AbstractAttribute::getIdAddr()
4040   const char *getIdAddr() const override { return &ID; }
4041 
4042   /// This function should return true if the type of the \p AA is
  /// AAPrivatizablePtr
4044   static bool classof(const AbstractAttribute *AA) {
4045     return (AA->getIdAddr() == &ID);
4046   }
4047 
4048   /// Unique ID (due to the unique address)
4049   static const char ID;
4050 };
4051 
4052 /// An abstract interface for memory access kind related attributes
4053 /// (readnone/readonly/writeonly).
4054 struct AAMemoryBehavior
4055     : public IRAttribute<
4056           Attribute::ReadNone,
4057           StateWrapper<BitIntegerState<uint8_t, 3>, AbstractAttribute>> {
4058   AAMemoryBehavior(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4059 
4060   /// State encoding bits. A set bit in the state means the property holds.
4061   /// BEST_STATE is the best possible state, 0 the worst possible state.
4062   enum {
4063     NO_READS = 1 << 0,
4064     NO_WRITES = 1 << 1,
4065     NO_ACCESSES = NO_READS | NO_WRITES,
4066 
4067     BEST_STATE = NO_ACCESSES,
4068   };
4069   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
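
  /// For example (illustrative): a `readonly` function has NO_WRITES set but
  /// not NO_READS. Assuming `MemBehaviorAA` is the AAMemoryBehavior for its
  /// position:
  /// \code
  ///   MemBehaviorAA.isAssumedReadOnly(); // true, no writes are assumed.
  ///   MemBehaviorAA.isAssumedReadNone(); // false, reads may still happen.
  /// \endcode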
4070 
  /// Return true if we know that the underlying value is neither read nor
  /// written in its respective scope.
4073   bool isKnownReadNone() const { return isKnown(NO_ACCESSES); }
4074 
  /// Return true if we assume that the underlying value is neither read nor
  /// written in its respective scope.
4077   bool isAssumedReadNone() const { return isAssumed(NO_ACCESSES); }
4078 
  /// Return true if we know that the underlying value is not written in its
  /// respective scope.
4081   bool isKnownReadOnly() const { return isKnown(NO_WRITES); }
4082 
  /// Return true if we assume that the underlying value is not written in its
  /// respective scope.
4085   bool isAssumedReadOnly() const { return isAssumed(NO_WRITES); }
4086 
4087   /// Return true if we know that the underlying value is not read in its
4088   /// respective scope.
4089   bool isKnownWriteOnly() const { return isKnown(NO_READS); }
4090 
4091   /// Return true if we assume that the underlying value is not read in its
4092   /// respective scope.
4093   bool isAssumedWriteOnly() const { return isAssumed(NO_READS); }
4094 
4095   /// Create an abstract attribute view for the position \p IRP.
4096   static AAMemoryBehavior &createForPosition(const IRPosition &IRP,
4097                                              Attributor &A);
4098 
4099   /// See AbstractAttribute::getName()
4100   const std::string getName() const override { return "AAMemoryBehavior"; }
4101 
4102   /// See AbstractAttribute::getIdAddr()
4103   const char *getIdAddr() const override { return &ID; }
4104 
4105   /// This function should return true if the type of the \p AA is
4106   /// AAMemoryBehavior
4107   static bool classof(const AbstractAttribute *AA) {
4108     return (AA->getIdAddr() == &ID);
4109   }
4110 
4111   /// Unique ID (due to the unique address)
4112   static const char ID;
4113 };
4114 
4115 /// An abstract interface for all memory location attributes
4116 /// (readnone/argmemonly/inaccessiblememonly/inaccessibleorargmemonly).
4117 struct AAMemoryLocation
4118     : public IRAttribute<
4119           Attribute::ReadNone,
4120           StateWrapper<BitIntegerState<uint32_t, 511>, AbstractAttribute>> {
4121   using MemoryLocationsKind = StateType::base_t;
4122 
4123   AAMemoryLocation(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4124 
4125   /// Encoding of different locations that could be accessed by a memory
4126   /// access.
4127   enum {
4128     ALL_LOCATIONS = 0,
4129     NO_LOCAL_MEM = 1 << 0,
4130     NO_CONST_MEM = 1 << 1,
4131     NO_GLOBAL_INTERNAL_MEM = 1 << 2,
4132     NO_GLOBAL_EXTERNAL_MEM = 1 << 3,
4133     NO_GLOBAL_MEM = NO_GLOBAL_INTERNAL_MEM | NO_GLOBAL_EXTERNAL_MEM,
4134     NO_ARGUMENT_MEM = 1 << 4,
4135     NO_INACCESSIBLE_MEM = 1 << 5,
4136     NO_MALLOCED_MEM = 1 << 6,
4137     NO_UNKOWN_MEM = 1 << 7,
4138     NO_LOCATIONS = NO_LOCAL_MEM | NO_CONST_MEM | NO_GLOBAL_INTERNAL_MEM |
4139                    NO_GLOBAL_EXTERNAL_MEM | NO_ARGUMENT_MEM |
4140                    NO_INACCESSIBLE_MEM | NO_MALLOCED_MEM | NO_UNKOWN_MEM,
4141 
4142     // Helper bit to track if we gave up or not.
4143     VALID_STATE = NO_LOCATIONS + 1,
4144 
4145     BEST_STATE = NO_LOCATIONS | VALID_STATE,
4146   };
4147   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
4148 
  /// Return true if we know that the associated function has no observable
  /// accesses.
4151   bool isKnownReadNone() const { return isKnown(NO_LOCATIONS); }
4152 
  /// Return true if we assume that the associated function has no observable
  /// accesses.
4155   bool isAssumedReadNone() const {
4156     return isAssumed(NO_LOCATIONS) || isAssumedStackOnly();
4157   }
4158 
  /// Return true if we know that the associated function has at most
  /// local/stack accesses.
4161   bool isKnowStackOnly() const {
4162     return isKnown(inverseLocation(NO_LOCAL_MEM, true, true));
4163   }
4164 
  /// Return true if we assume that the associated function has at most
  /// local/stack accesses.
4167   bool isAssumedStackOnly() const {
4168     return isAssumed(inverseLocation(NO_LOCAL_MEM, true, true));
4169   }
4170 
  /// Return true if we know that the underlying value will only access
  /// inaccessible memory (see Attribute::InaccessibleMemOnly).
4173   bool isKnownInaccessibleMemOnly() const {
4174     return isKnown(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
4175   }
4176 
  /// Return true if we assume that the underlying value will only access
  /// inaccessible memory (see Attribute::InaccessibleMemOnly).
4179   bool isAssumedInaccessibleMemOnly() const {
4180     return isAssumed(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
4181   }
4182 
4183   /// Return true if we know that the underlying value will only access
4184   /// argument pointees (see Attribute::ArgMemOnly).
4185   bool isKnownArgMemOnly() const {
4186     return isKnown(inverseLocation(NO_ARGUMENT_MEM, true, true));
4187   }
4188 
4189   /// Return true if we assume that the underlying value will only access
4190   /// argument pointees (see Attribute::ArgMemOnly).
4191   bool isAssumedArgMemOnly() const {
4192     return isAssumed(inverseLocation(NO_ARGUMENT_MEM, true, true));
4193   }
4194 
4195   /// Return true if we know that the underlying value will only access
  /// inaccessible memory or argument pointees (see
4197   /// Attribute::InaccessibleOrArgMemOnly).
4198   bool isKnownInaccessibleOrArgMemOnly() const {
4199     return isKnown(
4200         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
4201   }
4202 
4203   /// Return true if we assume that the underlying value will only access
  /// inaccessible memory or argument pointees (see
4205   /// Attribute::InaccessibleOrArgMemOnly).
4206   bool isAssumedInaccessibleOrArgMemOnly() const {
4207     return isAssumed(
4208         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
4209   }
4210 
  /// Return true if the underlying value may access memory through argument
4212   /// pointers of the associated function, if any.
4213   bool mayAccessArgMem() const { return !isAssumed(NO_ARGUMENT_MEM); }
4214 
  /// Return true if only the memory locations specified by \p MLK are assumed
4216   /// to be accessed by the associated function.
4217   bool isAssumedSpecifiedMemOnly(MemoryLocationsKind MLK) const {
4218     return isAssumed(MLK);
4219   }
4220 
4221   /// Return the locations that are assumed to be not accessed by the associated
4222   /// function, if any.
4223   MemoryLocationsKind getAssumedNotAccessedLocation() const {
4224     return getAssumed();
4225   }
4226 
4227   /// Return the inverse of location \p Loc, thus for NO_XXX the return
4228   /// describes ONLY_XXX. The flags \p AndLocalMem and \p AndConstMem determine
4229   /// if local (=stack) and constant memory are allowed as well. Most of the
4230   /// time we do want them to be included, e.g., argmemonly allows accesses via
4231   /// argument pointers or local or constant memory accesses.
4232   static MemoryLocationsKind
4233   inverseLocation(MemoryLocationsKind Loc, bool AndLocalMem, bool AndConstMem) {
4234     return NO_LOCATIONS & ~(Loc | (AndLocalMem ? NO_LOCAL_MEM : 0) |
4235                             (AndConstMem ? NO_CONST_MEM : 0));
4236   };
4237 
4238   /// Return the locations encoded by \p MLK as a readable string.
4239   static std::string getMemoryLocationsAsStr(MemoryLocationsKind MLK);
4240 
4241   /// Simple enum to distinguish read/write/read-write accesses.
4242   enum AccessKind {
4243     NONE = 0,
4244     READ = 1 << 0,
4245     WRITE = 1 << 1,
4246     READ_WRITE = READ | WRITE,
4247   };
4248 
4249   /// Check \p Pred on all accesses to the memory kinds specified by \p MLK.
4250   ///
4251   /// This method will evaluate \p Pred on all accesses (access instruction +
4252   /// underlying accessed memory pointer) and it will return true if \p Pred
4253   /// holds every time.
4254   virtual bool checkForAllAccessesToMemoryKind(
4255       function_ref<bool(const Instruction *, const Value *, AccessKind,
4256                         MemoryLocationsKind)>
4257           Pred,
4258       MemoryLocationsKind MLK) const = 0;
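
  /// Example (illustrative sketch; the kind mask to pass depends on the
  /// caller, here we assume accesses to argument memory are selected by
  /// passing NO_ARGUMENT_MEM, and `MemLocAA` is the AAMemoryLocation of a
  /// function):
  /// \code
  ///   bool OnlyReads = MemLocAA.checkForAllAccessesToMemoryKind(
  ///       [](const Instruction *I, const Value *Ptr,
  ///          AAMemoryLocation::AccessKind AK,
  ///          AAMemoryLocation::MemoryLocationsKind) {
  ///         return !(AK & AAMemoryLocation::WRITE); // Reject writes.
  ///       },
  ///       AAMemoryLocation::NO_ARGUMENT_MEM);
  /// \endcode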
4259 
4260   /// Create an abstract attribute view for the position \p IRP.
4261   static AAMemoryLocation &createForPosition(const IRPosition &IRP,
4262                                              Attributor &A);
4263 
4264   /// See AbstractState::getAsStr().
4265   const std::string getAsStr() const override {
4266     return getMemoryLocationsAsStr(getAssumedNotAccessedLocation());
4267   }
4268 
4269   /// See AbstractAttribute::getName()
4270   const std::string getName() const override { return "AAMemoryLocation"; }
4271 
4272   /// See AbstractAttribute::getIdAddr()
4273   const char *getIdAddr() const override { return &ID; }
4274 
4275   /// This function should return true if the type of the \p AA is
4276   /// AAMemoryLocation
4277   static bool classof(const AbstractAttribute *AA) {
4278     return (AA->getIdAddr() == &ID);
4279   }
4280 
4281   /// Unique ID (due to the unique address)
4282   static const char ID;
4283 };
4284 
4285 /// An abstract interface for range value analysis.
4286 struct AAValueConstantRange
4287     : public StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t> {
4288   using Base = StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t>;
4289   AAValueConstantRange(const IRPosition &IRP, Attributor &A)
4290       : Base(IRP, IRP.getAssociatedType()->getIntegerBitWidth()) {}
4291 
4292   /// See AbstractAttribute::getState(...).
4293   IntegerRangeState &getState() override { return *this; }
4294   const IntegerRangeState &getState() const override { return *this; }
4295 
4296   /// Create an abstract attribute view for the position \p IRP.
4297   static AAValueConstantRange &createForPosition(const IRPosition &IRP,
4298                                                  Attributor &A);
4299 
  /// Return an assumed range for the associated value at a program point
  /// \p CtxI. If \p CtxI is nullptr, simply return an assumed range.
4302   virtual ConstantRange
4303   getAssumedConstantRange(Attributor &A,
4304                           const Instruction *CtxI = nullptr) const = 0;
4305 
4306   /// Return a known range for the associated value at a program point \p CtxI.
  /// If \p CtxI is nullptr, simply return a known range.
4308   virtual ConstantRange
4309   getKnownConstantRange(Attributor &A,
4310                         const Instruction *CtxI = nullptr) const = 0;
4311 
  /// Return an assumed constant for the associated value at a program point
  /// \p CtxI.
4314   Optional<Constant *>
4315   getAssumedConstant(Attributor &A, const Instruction *CtxI = nullptr) const {
4316     ConstantRange RangeV = getAssumedConstantRange(A, CtxI);
4317     if (auto *C = RangeV.getSingleElement()) {
4318       Type *Ty = getAssociatedValue().getType();
4319       return cast_or_null<Constant>(
4320           AA::getWithType(*ConstantInt::get(Ty->getContext(), *C), *Ty));
4321     }
4322     if (RangeV.isEmptySet())
4323       return llvm::None;
4324     return nullptr;
4325   }
4326 
4327   /// See AbstractAttribute::getName()
4328   const std::string getName() const override { return "AAValueConstantRange"; }
4329 
4330   /// See AbstractAttribute::getIdAddr()
4331   const char *getIdAddr() const override { return &ID; }
4332 
4333   /// This function should return true if the type of the \p AA is
4334   /// AAValueConstantRange
4335   static bool classof(const AbstractAttribute *AA) {
4336     return (AA->getIdAddr() == &ID);
4337   }
4338 
4339   /// Unique ID (due to the unique address)
4340   static const char ID;
4341 };
4342 
4343 /// A class for a set state.
/// The assumed boolean state indicates whether the corresponding set is a full
/// set or not. If the assumed state is false, this is the worst state. The
/// worst state (invalid state) of a set of potential values is when the set
/// contains every possible value (i.e., we cannot in any way limit the value
/// that the target position can take). That never happens naturally; we only
4349 /// force it. As for the conditions under which we force it, see
4350 /// AAPotentialConstantValues.
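///
/// For example (illustrative): unioning a state holding {1, 3} with one
/// holding {3, 5} yields {1, 3, 5}; once the set size reaches
/// MaxPotentialValues the state is invalidated, i.e., it degenerates to the
/// full set:
/// \code
///   auto S = PotentialValuesState<APInt>::getBestState();
///   S.unionAssumed(APInt(32, 1));
///   S.unionAssumed(APInt(32, 3)); // S.getAssumedSet() now holds {1, 3}.
/// \endcode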
4351 template <typename MemberTy> struct PotentialValuesState : AbstractState {
4352   using SetTy = SmallSetVector<MemberTy, 8>;
4353 
4354   PotentialValuesState() : IsValidState(true), UndefIsContained(false) {}
4355 
4356   PotentialValuesState(bool IsValid)
4357       : IsValidState(IsValid), UndefIsContained(false) {}
4358 
4359   /// See AbstractState::isValidState(...)
4360   bool isValidState() const override { return IsValidState.isValidState(); }
4361 
4362   /// See AbstractState::isAtFixpoint(...)
4363   bool isAtFixpoint() const override { return IsValidState.isAtFixpoint(); }
4364 
4365   /// See AbstractState::indicatePessimisticFixpoint(...)
4366   ChangeStatus indicatePessimisticFixpoint() override {
4367     return IsValidState.indicatePessimisticFixpoint();
4368   }
4369 
4370   /// See AbstractState::indicateOptimisticFixpoint(...)
4371   ChangeStatus indicateOptimisticFixpoint() override {
4372     return IsValidState.indicateOptimisticFixpoint();
4373   }
4374 
4375   /// Return the assumed state
4376   PotentialValuesState &getAssumed() { return *this; }
4377   const PotentialValuesState &getAssumed() const { return *this; }
4378 
4379   /// Return this set. We should check whether this set is valid or not by
4380   /// isValidState() before calling this function.
4381   const SetTy &getAssumedSet() const {
    assert(isValidState() && "This set should not be used when it is invalid!");
4383     return Set;
4384   }
4385 
4386   /// Returns whether this state contains an undef value or not.
4387   bool undefIsContained() const {
    assert(isValidState() && "This flag should not be used when it is invalid!");
4389     return UndefIsContained;
4390   }
4391 
4392   bool operator==(const PotentialValuesState &RHS) const {
4393     if (isValidState() != RHS.isValidState())
4394       return false;
4395     if (!isValidState() && !RHS.isValidState())
4396       return true;
4397     if (undefIsContained() != RHS.undefIsContained())
4398       return false;
4399     return Set == RHS.getAssumedSet();
4400   }
4401 
4402   /// Maximum number of potential values to be tracked.
4403   /// This is set by -attributor-max-potential-values command line option
4404   static unsigned MaxPotentialValues;
4405 
4406   /// Return empty set as the best state of potential values.
4407   static PotentialValuesState getBestState() {
4408     return PotentialValuesState(true);
4409   }
4410 
4411   static PotentialValuesState getBestState(const PotentialValuesState &PVS) {
4412     return getBestState();
4413   }
4414 
4415   /// Return full set as the worst state of potential values.
4416   static PotentialValuesState getWorstState() {
4417     return PotentialValuesState(false);
4418   }
4419 
4420   /// Union assumed set with the passed value.
4421   void unionAssumed(const MemberTy &C) { insert(C); }
4422 
4423   /// Union assumed set with assumed set of the passed state \p PVS.
4424   void unionAssumed(const PotentialValuesState &PVS) { unionWith(PVS); }
4425 
4426   /// Union assumed set with an undef value.
4427   void unionAssumedWithUndef() { unionWithUndef(); }
4428 
4429   /// "Clamp" this state with \p PVS.
4430   PotentialValuesState operator^=(const PotentialValuesState &PVS) {
4431     IsValidState ^= PVS.IsValidState;
4432     unionAssumed(PVS);
4433     return *this;
4434   }
4435 
4436   PotentialValuesState operator&=(const PotentialValuesState &PVS) {
4437     IsValidState &= PVS.IsValidState;
4438     unionAssumed(PVS);
4439     return *this;
4440   }
4441 
4442   bool contains(const MemberTy &V) const {
4443     return !isValidState() ? true : Set.contains(V);
4444   }
4445 
4446 protected:
4447   SetTy &getAssumedSet() {
    assert(isValidState() && "This set should not be used when it is invalid!");
4449     return Set;
4450   }
4451 
4452 private:
4453   /// Check the size of this set, and invalidate when the size is no
  /// less than the \p MaxPotentialValues threshold.
4455   void checkAndInvalidate() {
4456     if (Set.size() >= MaxPotentialValues)
4457       indicatePessimisticFixpoint();
4458     else
4459       reduceUndefValue();
4460   }
4461 
  /// If this state contains both undef and non-undef values, we can drop the
  /// undef and keep only the non-undef values.
4464   void reduceUndefValue() { UndefIsContained = UndefIsContained & Set.empty(); }
4465 
4466   /// Insert an element into this set.
4467   void insert(const MemberTy &C) {
4468     if (!isValidState())
4469       return;
4470     Set.insert(C);
4471     checkAndInvalidate();
4472   }
4473 
4474   /// Take union with R.
4475   void unionWith(const PotentialValuesState &R) {
4476     /// If this is a full set, do nothing.
4477     if (!isValidState())
4478       return;
4479     /// If R is full set, change L to a full set.
4480     if (!R.isValidState()) {
4481       indicatePessimisticFixpoint();
4482       return;
4483     }
4484     for (const MemberTy &C : R.Set)
4485       Set.insert(C);
4486     UndefIsContained |= R.undefIsContained();
4487     checkAndInvalidate();
4488   }
4489 
4490   /// Take union with an undef value.
4491   void unionWithUndef() {
4492     UndefIsContained = true;
4493     reduceUndefValue();
4494   }
4495 
4496   /// Take intersection with R.
4497   void intersectWith(const PotentialValuesState &R) {
4498     /// If R is a full set, do nothing.
4499     if (!R.isValidState())
4500       return;
4501     /// If this is a full set, change this to R.
4502     if (!isValidState()) {
4503       *this = R;
4504       return;
4505     }
4506     SetTy IntersectSet;
4507     for (const MemberTy &C : Set) {
4508       if (R.Set.count(C))
4509         IntersectSet.insert(C);
4510     }
4511     Set = IntersectSet;
4512     UndefIsContained &= R.undefIsContained();
4513     reduceUndefValue();
4514   }
4515 
  /// A helper state which indicates whether this state is valid or not.
4517   BooleanState IsValidState;
4518 
4519   /// Container for potential values
4520   SetTy Set;
4521 
4522   /// Flag for undef value
4523   bool UndefIsContained;
4524 };
4525 
4526 using PotentialConstantIntValuesState = PotentialValuesState<APInt>;
4527 using PotentialLLVMValuesState =
4528     PotentialValuesState<std::pair<AA::ValueAndContext, AA::ValueScope>>;
4529 
4530 raw_ostream &operator<<(raw_ostream &OS,
4531                         const PotentialConstantIntValuesState &R);
4532 raw_ostream &operator<<(raw_ostream &OS, const PotentialLLVMValuesState &R);
4533 
4534 /// An abstract interface for potential values analysis.
4535 ///
4536 /// This AA collects potential values for each IR position.
4537 /// An assumed set of potential values is initialized with the empty set (the
4538 /// best state) and it will grow monotonically as we find more potential values
4539 /// for this position.
4540 /// The set might be forced to the worst state, that is, to contain every
/// possible value for this position in two cases:
4542 ///   1. We surpassed the \p MaxPotentialValues threshold. This includes the
4543 ///      case that this position is affected (e.g. because of an operation) by a
4544 ///      Value that is in the worst state.
4545 ///   2. We tried to initialize on a Value that we cannot handle (e.g. an
4546 ///      operator we do not currently handle).
4547 ///
/// For values that are not constant integers, see AAPotentialValues.
4549 struct AAPotentialConstantValues
4550     : public StateWrapper<PotentialConstantIntValuesState, AbstractAttribute> {
4551   using Base = StateWrapper<PotentialConstantIntValuesState, AbstractAttribute>;
4552   AAPotentialConstantValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4553 
4554   /// See AbstractAttribute::getState(...).
4555   PotentialConstantIntValuesState &getState() override { return *this; }
4556   const PotentialConstantIntValuesState &getState() const override {
4557     return *this;
4558   }
4559 
4560   /// Create an abstract attribute view for the position \p IRP.
4561   static AAPotentialConstantValues &createForPosition(const IRPosition &IRP,
4562                                                       Attributor &A);
4563 
  /// Return the assumed constant for the associated value, if any.
4565   Optional<Constant *>
4566   getAssumedConstant(Attributor &A, const Instruction *CtxI = nullptr) const {
4567     if (!isValidState())
4568       return nullptr;
4569     if (getAssumedSet().size() == 1) {
4570       Type *Ty = getAssociatedValue().getType();
4571       return cast_or_null<Constant>(AA::getWithType(
4572           *ConstantInt::get(Ty->getContext(), *(getAssumedSet().begin())),
4573           *Ty));
4574     }
4575     if (getAssumedSet().size() == 0) {
4576       if (undefIsContained())
4577         return UndefValue::get(getAssociatedValue().getType());
4578       return llvm::None;
4579     }
4580 
4581     return nullptr;
4582   }
4583 
4584   /// See AbstractAttribute::getName()
4585   const std::string getName() const override {
4586     return "AAPotentialConstantValues";
4587   }
4588 
4589   /// See AbstractAttribute::getIdAddr()
4590   const char *getIdAddr() const override { return &ID; }
4591 
4592   /// This function should return true if the type of the \p AA is
4593   /// AAPotentialConstantValues
4594   static bool classof(const AbstractAttribute *AA) {
4595     return (AA->getIdAddr() == &ID);
4596   }
4597 
4598   /// Unique ID (due to the unique address)
4599   static const char ID;
4600 };
4601 
4602 struct AAPotentialValues
4603     : public StateWrapper<PotentialLLVMValuesState, AbstractAttribute> {
4604   using Base = StateWrapper<PotentialLLVMValuesState, AbstractAttribute>;
4605   AAPotentialValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4606 
4607   /// See AbstractAttribute::getState(...).
4608   PotentialLLVMValuesState &getState() override { return *this; }
4609   const PotentialLLVMValuesState &getState() const override { return *this; }
4610 
4611   /// Create an abstract attribute view for the position \p IRP.
4612   static AAPotentialValues &createForPosition(const IRPosition &IRP,
4613                                               Attributor &A);
4614 
4615   /// Extract the single value in \p Values if any.
4616   static Value *getSingleValue(Attributor &A, const AbstractAttribute &AA,
4617                                const IRPosition &IRP,
4618                                SmallVectorImpl<AA::ValueAndContext> &Values);
4619 
4620   /// See AbstractAttribute::getName()
4621   const std::string getName() const override { return "AAPotentialValues"; }
4622 
4623   /// See AbstractAttribute::getIdAddr()
4624   const char *getIdAddr() const override { return &ID; }
4625 
4626   /// This function should return true if the type of the \p AA is
4627   /// AAPotentialValues
4628   static bool classof(const AbstractAttribute *AA) {
4629     return (AA->getIdAddr() == &ID);
4630   }
4631 
4632   /// Unique ID (due to the unique address)
4633   static const char ID;
4634 
4635 private:
4636   virtual bool
4637   getAssumedSimplifiedValues(Attributor &A,
4638                              SmallVectorImpl<AA::ValueAndContext> &Values,
4639                              AA::ValueScope) const = 0;
4640 
4641   friend struct Attributor;
4642 };
4643 
4644 /// An abstract interface for all noundef attributes.
4645 struct AANoUndef
4646     : public IRAttribute<Attribute::NoUndef,
4647                          StateWrapper<BooleanState, AbstractAttribute>> {
4648   AANoUndef(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4649 
4650   /// Return true if we assume that the underlying value is noundef.
4651   bool isAssumedNoUndef() const { return getAssumed(); }
4652 
  /// Return true if we know that the underlying value is noundef.
4654   bool isKnownNoUndef() const { return getKnown(); }
4655 
4656   /// Create an abstract attribute view for the position \p IRP.
4657   static AANoUndef &createForPosition(const IRPosition &IRP, Attributor &A);
4658 
4659   /// See AbstractAttribute::getName()
4660   const std::string getName() const override { return "AANoUndef"; }
4661 
4662   /// See AbstractAttribute::getIdAddr()
4663   const char *getIdAddr() const override { return &ID; }
4664 
4665   /// This function should return true if the type of the \p AA is AANoUndef
4666   static bool classof(const AbstractAttribute *AA) {
4667     return (AA->getIdAddr() == &ID);
4668   }
4669 
4670   /// Unique ID (due to the unique address)
4671   static const char ID;
4672 };
4673 
4674 struct AACallGraphNode;
4675 struct AACallEdges;
4676 
/// An iterator for call edges; it creates AACallEdges attributes lazily.
/// This iterator becomes invalid if the underlying edge list changes, so it
/// should not outlive an iteration of the Attributor.
4680 class AACallEdgeIterator
4681     : public iterator_adaptor_base<AACallEdgeIterator,
4682                                    SetVector<Function *>::iterator> {
4683   AACallEdgeIterator(Attributor &A, SetVector<Function *>::iterator Begin)
4684       : iterator_adaptor_base(Begin), A(A) {}
4685 
4686 public:
4687   AACallGraphNode *operator*() const;
4688 
4689 private:
4690   Attributor &A;
4691   friend AACallEdges;
4692   friend AttributorCallGraph;
4693 };
4694 
4695 struct AACallGraphNode {
4696   AACallGraphNode(Attributor &A) : A(A) {}
4697   virtual ~AACallGraphNode() = default;
4698 
4699   virtual AACallEdgeIterator optimisticEdgesBegin() const = 0;
4700   virtual AACallEdgeIterator optimisticEdgesEnd() const = 0;
4701 
4702   /// Iterator range for exploring the call graph.
4703   iterator_range<AACallEdgeIterator> optimisticEdgesRange() const {
4704     return iterator_range<AACallEdgeIterator>(optimisticEdgesBegin(),
4705                                               optimisticEdgesEnd());
4706   }
4707 
4708 protected:
4709   /// Reference to Attributor needed for GraphTraits implementation.
4710   Attributor &A;
4711 };
4712 
4713 /// An abstract state for querying live call edges.
4714 /// This interface uses the Attributor's optimistic liveness
4715 /// information to compute the edges that are alive.
4716 struct AACallEdges : public StateWrapper<BooleanState, AbstractAttribute>,
4717                      AACallGraphNode {
4718   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4719 
4720   AACallEdges(const IRPosition &IRP, Attributor &A)
4721       : Base(IRP), AACallGraphNode(A) {}
4722 
4723   /// Get the optimistic edges.
4724   virtual const SetVector<Function *> &getOptimisticEdges() const = 0;
4725 
  /// Return true if there is any call with an unknown callee.
4727   virtual bool hasUnknownCallee() const = 0;
4728 
  /// Return true if there is any call with an unknown callee, excluding
  /// inline asm.
4730   virtual bool hasNonAsmUnknownCallee() const = 0;
4731 
4732   /// Iterator for exploring the call graph.
4733   AACallEdgeIterator optimisticEdgesBegin() const override {
4734     return AACallEdgeIterator(A, getOptimisticEdges().begin());
4735   }
4736 
4737   /// Iterator for exploring the call graph.
4738   AACallEdgeIterator optimisticEdgesEnd() const override {
4739     return AACallEdgeIterator(A, getOptimisticEdges().end());
4740   }
4741 
4742   /// Create an abstract attribute view for the position \p IRP.
4743   static AACallEdges &createForPosition(const IRPosition &IRP, Attributor &A);
4744 
4745   /// See AbstractAttribute::getName()
4746   const std::string getName() const override { return "AACallEdges"; }
4747 
4748   /// See AbstractAttribute::getIdAddr()
4749   const char *getIdAddr() const override { return &ID; }
4750 
4751   /// This function should return true if the type of the \p AA is AACallEdges.
4752   static bool classof(const AbstractAttribute *AA) {
4753     return (AA->getIdAddr() == &ID);
4754   }
4755 
4756   /// Unique ID (due to the unique address)
4757   static const char ID;
4758 };
4759 
4760 // Synthetic root node for the Attributor's internal call graph.
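//
// Example (illustrative sketch, assuming `A` is the Attributor instance and
// the run can materialize the required AACallEdges attributes):
//   AttributorCallGraph ACG(A);
//   ACG.populateAll(); // Lazily creates the remaining AACallEdges attributes.
//   ACG.print();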
4761 struct AttributorCallGraph : public AACallGraphNode {
4762   AttributorCallGraph(Attributor &A) : AACallGraphNode(A) {}
4763   virtual ~AttributorCallGraph() = default;
4764 
4765   AACallEdgeIterator optimisticEdgesBegin() const override {
4766     return AACallEdgeIterator(A, A.Functions.begin());
4767   }
4768 
4769   AACallEdgeIterator optimisticEdgesEnd() const override {
4770     return AACallEdgeIterator(A, A.Functions.end());
4771   }
4772 
4773   /// Force populate the entire call graph.
4774   void populateAll() const {
4775     for (const AACallGraphNode *AA : optimisticEdgesRange()) {
4776       // Nothing else to do here.
4777       (void)AA;
4778     }
4779   }
4780 
4781   void print();
4782 };
4783 
4784 template <> struct GraphTraits<AACallGraphNode *> {
4785   using NodeRef = AACallGraphNode *;
4786   using ChildIteratorType = AACallEdgeIterator;
4787 
4788   static AACallEdgeIterator child_begin(AACallGraphNode *Node) {
4789     return Node->optimisticEdgesBegin();
4790   }
4791 
4792   static AACallEdgeIterator child_end(AACallGraphNode *Node) {
4793     return Node->optimisticEdgesEnd();
4794   }
4795 };
4796 
4797 template <>
4798 struct GraphTraits<AttributorCallGraph *>
4799     : public GraphTraits<AACallGraphNode *> {
4800   using nodes_iterator = AACallEdgeIterator;
4801 
4802   static AACallGraphNode *getEntryNode(AttributorCallGraph *G) {
4803     return static_cast<AACallGraphNode *>(G);
4804   }
4805 
4806   static AACallEdgeIterator nodes_begin(const AttributorCallGraph *G) {
4807     return G->optimisticEdgesBegin();
4808   }
4809 
4810   static AACallEdgeIterator nodes_end(const AttributorCallGraph *G) {
4811     return G->optimisticEdgesEnd();
4812   }
4813 };
4814 
4815 template <>
4816 struct DOTGraphTraits<AttributorCallGraph *> : public DefaultDOTGraphTraits {
4817   DOTGraphTraits(bool Simple = false) : DefaultDOTGraphTraits(Simple) {}
4818 
4819   std::string getNodeLabel(const AACallGraphNode *Node,
4820                            const AttributorCallGraph *Graph) {
4821     const AACallEdges *AACE = static_cast<const AACallEdges *>(Node);
4822     return AACE->getAssociatedFunction()->getName().str();
4823   }
4824 
4825   static bool isNodeHidden(const AACallGraphNode *Node,
4826                            const AttributorCallGraph *Graph) {
4827     // Hide the synth root.
4828     return static_cast<const AACallGraphNode *>(Graph) == Node;
4829   }
4830 };
4831 
4832 struct AAExecutionDomain
4833     : public StateWrapper<BooleanState, AbstractAttribute> {
4834   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4835   AAExecutionDomain(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4836 
4837   /// Create an abstract attribute view for the position \p IRP.
4838   static AAExecutionDomain &createForPosition(const IRPosition &IRP,
4839                                               Attributor &A);
4840 
4841   /// See AbstractAttribute::getName().
4842   const std::string getName() const override { return "AAExecutionDomain"; }
4843 
4844   /// See AbstractAttribute::getIdAddr().
4845   const char *getIdAddr() const override { return &ID; }
4846 
4847   /// Check if an instruction is executed only by the initial thread.
4848   virtual bool isExecutedByInitialThreadOnly(const Instruction &) const = 0;
4849 
4850   /// Check if a basic block is executed only by the initial thread.
4851   virtual bool isExecutedByInitialThreadOnly(const BasicBlock &) const = 0;
4852 
4853   /// This function should return true if the type of the \p AA is
4854   /// AAExecutionDomain.
4855   static bool classof(const AbstractAttribute *AA) {
4856     return (AA->getIdAddr() == &ID);
4857   }
4858 
4859   /// Unique ID (due to the unique address)
4860   static const char ID;
4861 };
4862 
4863 /// An abstract Attribute for computing reachability between functions.
4864 struct AAFunctionReachability
4865     : public StateWrapper<BooleanState, AbstractAttribute> {
4866   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4867 
4868   AAFunctionReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4869 
4870   /// See AbstractAttribute::isQueryAA.
4871   bool isQueryAA() const override { return true; }
4872 
  /// Return true if the function represented by this position can reach
  /// \p Fn.
4874   virtual bool canReach(Attributor &A, const Function &Fn) const = 0;
4875 
  /// Return true if \p CB can reach \p Fn.
4877   virtual bool canReach(Attributor &A, CallBase &CB,
4878                         const Function &Fn) const = 0;
4879 
  /// Return true if \p Inst can reach \p Fn.
4881   /// See also AA::isPotentiallyReachable.
4882   virtual bool instructionCanReach(Attributor &A, const Instruction &Inst,
4883                                    const Function &Fn) const = 0;
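
  /// Example (illustrative sketch; `ReachAA` is assumed to be the
  /// AAFunctionReachability of a caller and `Callee` a candidate callee):
  /// \code
  ///   bool MayReach = ReachAA.canReach(A, *Callee);
  ///   // If MayReach is false, no call path to Callee is assumed to exist.
  /// \endcode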
4884 
4885   /// Create an abstract attribute view for the position \p IRP.
4886   static AAFunctionReachability &createForPosition(const IRPosition &IRP,
4887                                                    Attributor &A);
4888 
4889   /// See AbstractAttribute::getName()
4890   const std::string getName() const override {
4891     return "AAFunctionReachability";
4892   }
4893 
4894   /// See AbstractAttribute::getIdAddr()
4895   const char *getIdAddr() const override { return &ID; }
4896 
  /// This function should return true if the type of the \p AA is
  /// AAFunctionReachability.
4898   static bool classof(const AbstractAttribute *AA) {
4899     return (AA->getIdAddr() == &ID);
4900   }
4901 
4902   /// Unique ID (due to the unique address)
4903   static const char ID;
4904 
4905 private:
  /// Return true if this function can reach a call with an unknown callee.
4907   virtual bool canReachUnknownCallee() const = 0;
4908 };
4909 
/// An abstract interface for information about accesses through a pointer.
4911 struct AAPointerInfo : public AbstractAttribute {
4912   AAPointerInfo(const IRPosition &IRP) : AbstractAttribute(IRP) {}
4913 
4914   enum AccessKind {
4915     // First two bits to distinguish may and must accesses
4916     AK_MUST = 1 << 0,
4917     AK_MAY = 1 << 1,
4918 
4919     // Then two bits for read and write. These are not exclusive.
4920     AK_R = 1 << 2,
4921     AK_W = 1 << 3,
4922     AK_RW = AK_R | AK_W,
4923 
4924     // Helper for easy access.
4925     AK_MAY_READ = AK_MAY | AK_R,
4926     AK_MAY_WRITE = AK_MAY | AK_W,
4927     AK_MAY_READ_WRITE = AK_MAY | AK_R | AK_W,
4928     AK_MUST_READ = AK_MUST | AK_R,
4929     AK_MUST_WRITE = AK_MUST | AK_W,
4930     AK_MUST_READ_WRITE = AK_MUST | AK_R | AK_W,
4931   };
4932 
4933   /// An access description.
4934   struct Access {
4935     Access(Instruction *I, Optional<Value *> Content, AccessKind Kind, Type *Ty)
4936         : LocalI(I), RemoteI(I), Content(Content), Kind(Kind), Ty(Ty) {
4937       verify();
4938     }
4939     Access(Instruction *LocalI, Instruction *RemoteI, Optional<Value *> Content,
4940            AccessKind Kind, Type *Ty)
4941         : LocalI(LocalI), RemoteI(RemoteI), Content(Content), Kind(Kind),
4942           Ty(Ty) {
4943       verify();
4944     }
4945     Access(const Access &Other) = default;
4946     Access(const Access &&Other)
4947         : LocalI(Other.LocalI), RemoteI(Other.RemoteI), Content(Other.Content),
4948           Kind(Other.Kind), Ty(Other.Ty) {}
4949 
4950     Access &operator=(const Access &Other) = default;
4951     bool operator==(const Access &R) const {
4952       return LocalI == R.LocalI && RemoteI == R.RemoteI &&
4953              Content == R.Content && Kind == R.Kind;
4954     }
4955     bool operator!=(const Access &R) const { return !(*this == R); }
4956 
4957     Access &operator&=(const Access &R) {
4958       assert(RemoteI == R.RemoteI && "Expected same instruction!");
4959       Content =
4960           AA::combineOptionalValuesInAAValueLatice(Content, R.Content, Ty);
4961       Kind = AccessKind(Kind | R.Kind);
4962       return *this;
4963     }
4964 
4965     void verify() {
      assert(isMustAccess() + isMayAccess() == 1 &&
             "Expect exactly one of must or may access!");
4968     }
4969 
4970     /// Return the access kind.
4971     AccessKind getKind() const { return Kind; }
4972 
4973     /// Return true if this is a read access.
4974     bool isRead() const { return Kind & AK_R; }
4975 
4976     /// Return true if this is a write access.
4977     bool isWrite() const { return Kind & AK_W; }
4978 
4979     bool isMustAccess() const { return Kind & AK_MUST; }
4980     bool isMayAccess() const { return Kind & AK_MAY; }
4981 
4982     /// Return the instruction that causes the access with respect to the local
4983     /// scope of the associated attribute.
4984     Instruction *getLocalInst() const { return LocalI; }
4985 
4986     /// Return the actual instruction that causes the access.
4987     Instruction *getRemoteInst() const { return RemoteI; }
4988 
4989     /// Return true if the value written is not known yet.
4990     bool isWrittenValueYetUndetermined() const { return !Content; }
4991 
4992     /// Return true if the value written cannot be determined at all.
4993     bool isWrittenValueUnknown() const {
4994       return Content.has_value() && !*Content;
4995     }
4996 
4997     /// Return the type associated with the access, if known.
4998     Type *getType() const { return Ty; }
4999 
    /// Return the value written, if any. As long as
    /// isWrittenValueYetUndetermined returns true this function shall not be
5002     /// called.
5003     Value *getWrittenValue() const { return *Content; }
5004 
    /// Return the written value, which can be `llvm::None` if it is not yet
5006     /// determined.
5007     Optional<Value *> getContent() const { return Content; }
5008 
5009   private:
5010     /// The instruction responsible for the access with respect to the local
5011     /// scope of the associated attribute.
5012     Instruction *LocalI;
5013 
5014     /// The instruction responsible for the access.
5015     Instruction *RemoteI;
5016 
    /// The value written, if any. `llvm::None` means "not known yet", `nullptr`
    /// means it cannot be determined.
5019     Optional<Value *> Content;
5020 
5021     /// The access kind, e.g., READ, as bitset (could be more than one).
5022     AccessKind Kind;
5023 
5024     /// The type of the content, thus the type read/written, can be null if not
5025     /// available.
5026     Type *Ty;
5027   };
5028 
5029   /// Create an abstract attribute view for the position \p IRP.
5030   static AAPointerInfo &createForPosition(const IRPosition &IRP, Attributor &A);
5031 
5032   /// See AbstractAttribute::getName()
5033   const std::string getName() const override { return "AAPointerInfo"; }
5034 
5035   /// See AbstractAttribute::getIdAddr()
5036   const char *getIdAddr() const override { return &ID; }
5037 
5038   /// Helper to represent an access offset and size, with logic to deal with
5039   /// uncertainty and check for overlapping accesses.
5040   struct OffsetAndSize : public std::pair<int64_t, int64_t> {
5041     using BaseTy = std::pair<int64_t, int64_t>;
5042     OffsetAndSize(int64_t Offset, int64_t Size) : BaseTy(Offset, Size) {}
5043     OffsetAndSize(const BaseTy &P) : BaseTy(P) {}
5044     int64_t getOffset() const { return first; }
5045     int64_t getSize() const { return second; }
5046     static OffsetAndSize getUnknown() {
5047       return OffsetAndSize(Unknown, Unknown);
5048     }
5049 
5050     /// Return true if offset or size are unknown.
5051     bool offsetOrSizeAreUnknown() const {
5052       return getOffset() == OffsetAndSize::Unknown ||
5053              getSize() == OffsetAndSize::Unknown;
5054     }
5055 
5056     /// Return true if this offset and size pair might describe an address that
5057     /// overlaps with \p OAS.
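    /// For example (illustrative): OffsetAndSize(0, 8) overlaps
    /// OffsetAndSize(4, 4) but not OffsetAndSize(8, 4); any pair involving
    /// OffsetAndSize::getUnknown() is conservatively treated as overlapping.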
5058     bool mayOverlap(const OffsetAndSize &OAS) const {
5059       // Any unknown value and we are giving up -> overlap.
5060       if (offsetOrSizeAreUnknown() || OAS.offsetOrSizeAreUnknown())
5061         return true;
5062 
5063       // Check if one offset point is in the other interval [offset,
5064       // offset+size].
5065       return OAS.getOffset() + OAS.getSize() > getOffset() &&
5066              OAS.getOffset() < getOffset() + getSize();
5067     }
5068 
5069     /// Constant used to represent unknown offset or sizes.
5070     static constexpr int64_t Unknown = 1 << 31;
5071   };
5072 
5073   /// Call \p CB on all accesses that might interfere with \p OAS and return
5074   /// true if all such accesses were known and the callback returned true for
5075   /// all of them, false otherwise. An access interferes with an offset-size
5076   /// pair if it might read or write that memory region.
5077   virtual bool forallInterferingAccesses(
5078       OffsetAndSize OAS, function_ref<bool(const Access &, bool)> CB) const = 0;
5079 
5080   /// Call \p CB on all accesses that might interfere with \p I and
5081   /// return true if all such accesses were known and the callback returned true
5082   /// for all of them, false otherwise. In contrast to forallInterferingAccesses
5083   /// this function will perform reasoning to exclude write accesses that cannot
  /// affect the load even if, on the surface, they look as if they would. The
  /// flag \p HasBeenWrittenTo will be set to true if we know that \p I does
  /// not read the initial value of the underlying memory.
5087   virtual bool
5088   forallInterferingAccesses(Attributor &A, const AbstractAttribute &QueryingAA,
5089                             Instruction &I,
5090                             function_ref<bool(const Access &, bool)> CB,
5091                             bool &HasBeenWrittenTo) const = 0;
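
  /// Example (illustrative sketch; `PI` is assumed to be the AAPointerInfo for
  /// the pointer operand of a load `LI`, and `A`/`QueryingAA` come from the
  /// caller; the bool handed to the callback is assumed to signal whether the
  /// access range is exact):
  /// \code
  ///   bool HasBeenWrittenTo = false;
  ///   bool AllKnown = PI.forallInterferingAccesses(
  ///       A, QueryingAA, LI,
  ///       [&](const AAPointerInfo::Access &Acc, bool IsExact) {
  ///         return !Acc.isWrite(); // Bail on the first interfering write.
  ///       },
  ///       HasBeenWrittenTo);
  /// \endcode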
5092 
5093   /// This function should return true if the type of the \p AA is AAPointerInfo
5094   static bool classof(const AbstractAttribute *AA) {
5095     return (AA->getIdAddr() == &ID);
5096   }
5097 
5098   /// Unique ID (due to the unique address)
5099   static const char ID;
5100 };
5101 
5102 /// An abstract attribute for getting assumption information.
5103 struct AAAssumptionInfo
5104     : public StateWrapper<SetState<StringRef>, AbstractAttribute,
5105                           DenseSet<StringRef>> {
5106   using Base =
5107       StateWrapper<SetState<StringRef>, AbstractAttribute, DenseSet<StringRef>>;
5108 
5109   AAAssumptionInfo(const IRPosition &IRP, Attributor &A,
5110                    const DenseSet<StringRef> &Known)
5111       : Base(IRP, Known) {}
5112 
5113   /// Returns true if the assumption set contains the assumption \p Assumption.
5114   virtual bool hasAssumption(const StringRef Assumption) const = 0;
5115 
5116   /// Create an abstract attribute view for the position \p IRP.
5117   static AAAssumptionInfo &createForPosition(const IRPosition &IRP,
5118                                              Attributor &A);
5119 
5120   /// See AbstractAttribute::getName()
5121   const std::string getName() const override { return "AAAssumptionInfo"; }
5122 
5123   /// See AbstractAttribute::getIdAddr()
5124   const char *getIdAddr() const override { return &ID; }
5125 
5126   /// This function should return true if the type of the \p AA is
5127   /// AAAssumptionInfo
5128   static bool classof(const AbstractAttribute *AA) {
5129     return (AA->getIdAddr() == &ID);
5130   }
5131 
5132   /// Unique ID (due to the unique address)
5133   static const char ID;
5134 };
5135 
5136 raw_ostream &operator<<(raw_ostream &, const AAPointerInfo::Access &);
5137 
5138 /// Run options, used by the pass manager.
5139 enum AttributorRunOption {
5140   NONE = 0,
5141   MODULE = 1 << 0,
5142   CGSCC = 1 << 1,
5143   ALL = MODULE | CGSCC
5144 };
5145 
5146 } // end namespace llvm
5147 
5148 #endif // LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
5149