1 //===- Attributor.h --- Module-wide attribute deduction ---------*- C++ -*-===//
2 //
3 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4 // See https://llvm.org/LICENSE.txt for license information.
5 // SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6 //
7 //===----------------------------------------------------------------------===//
8 //
9 // Attributor: An inter procedural (abstract) "attribute" deduction framework.
10 //
11 // The Attributor framework is an inter procedural abstract analysis (fixpoint
12 // iteration analysis). The goal is to allow easy deduction of new attributes as
13 // well as information exchange between abstract attributes in-flight.
14 //
15 // The Attributor class is the driver and the link between the various abstract
16 // attributes. The Attributor will iterate until a fixpoint state is reached by
17 // all abstract attributes in-flight, or until it will enforce a pessimistic fix
18 // point because an iteration limit is reached.
19 //
20 // Abstract attributes, derived from the AbstractAttribute class, actually
21 // describe properties of the code. They can correspond to actual LLVM-IR
22 // attributes, or they can be more general, ultimately unrelated to LLVM-IR
23 // attributes. The latter is useful when an abstract attributes provides
24 // information to other abstract attributes in-flight but we might not want to
25 // manifest the information. The Attributor allows to query in-flight abstract
26 // attributes through the `Attributor::getAAFor` method (see the method
27 // description for an example). If the method is used by an abstract attribute
28 // P, and it results in an abstract attribute Q, the Attributor will
29 // automatically capture a potential dependence from Q to P. This dependence
30 // will cause P to be reevaluated whenever Q changes in the future.
31 //
32 // The Attributor will only reevaluate abstract attributes that might have
33 // changed since the last iteration. That means that the Attribute will not
34 // revisit all instructions/blocks/functions in the module but only query
35 // an update from a subset of the abstract attributes.
36 //
37 // The update method `AbstractAttribute::updateImpl` is implemented by the
38 // specific "abstract attribute" subclasses. The method is invoked whenever the
39 // currently assumed state (see the AbstractState class) might not be valid
40 // anymore. This can, for example, happen if the state was dependent on another
41 // abstract attribute that changed. In every invocation, the update method has
42 // to adjust the internal state of an abstract attribute to a point that is
43 // justifiable by the underlying IR and the current state of abstract attributes
44 // in-flight. Since the IR is given and assumed to be valid, the information
45 // derived from it can be assumed to hold. However, information derived from
46 // other abstract attributes is conditional on various things. If the justifying
47 // state changed, the `updateImpl` has to revisit the situation and potentially
48 // find another justification or limit the optimistic assumes made.
49 //
50 // Change is the key in this framework. Until a state of no-change, thus a
51 // fixpoint, is reached, the Attributor will query the abstract attributes
52 // in-flight to re-evaluate their state. If the (current) state is too
53 // optimistic, hence it cannot be justified anymore through other abstract
54 // attributes or the state of the IR, the state of the abstract attribute will
55 // have to change. Generally, we assume abstract attribute state to be a finite
56 // height lattice and the update function to be monotone. However, these
57 // conditions are not enforced because the iteration limit will guarantee
58 // termination. If an optimistic fixpoint is reached, or a pessimistic fix
59 // point is enforced after a timeout, the abstract attributes are tasked to
60 // manifest their result in the IR for passes to come.
61 //
62 // Attribute manifestation is not mandatory. If desired, there is support to
63 // generate a single or multiple LLVM-IR attributes already in the helper struct
64 // IRAttribute. In the simplest case, a subclass inherits from IRAttribute with
65 // a proper Attribute::AttrKind as template parameter. The Attributor
66 // manifestation framework will then create and place a new attribute if it is
67 // allowed to do so (based on the abstract state). Other use cases can be
68 // achieved by overloading AbstractAttribute or IRAttribute methods.
69 //
70 //
71 // The "mechanics" of adding a new "abstract attribute":
72 // - Define a class (transitively) inheriting from AbstractAttribute and one
73 //   (which could be the same) that (transitively) inherits from AbstractState.
74 //   For the latter, consider the already available BooleanState and
75 //   {Inc,Dec,Bit}IntegerState if they fit your needs, e.g., you require only a
76 //   number tracking or bit-encoding.
77 // - Implement all pure methods. Also use overloading if the attribute is not
78 //   conforming with the "default" behavior: A (set of) LLVM-IR attribute(s) for
79 //   an argument, call site argument, function return value, or function. See
80 //   the class and method descriptions for more information on the two
81 //   "Abstract" classes and their respective methods.
82 // - Register opportunities for the new abstract attribute in the
83 //   `Attributor::identifyDefaultAbstractAttributes` method if it should be
84 //   counted as a 'default' attribute.
85 // - Add sufficient tests.
86 // - Add a Statistics object for bookkeeping. If it is a simple (set of)
87 //   attribute(s) manifested through the Attributor manifestation framework, see
88 //   the bookkeeping function in Attributor.cpp.
89 // - If instructions with a certain opcode are interesting to the attribute, add
90 //   that opcode to the switch in `Attributor::identifyAbstractAttributes`. This
91 //   will make it possible to query all those instructions through the
92 //   `InformationCache::getOpcodeInstMapForFunction` interface and eliminate the
93 //   need to traverse the IR repeatedly.
94 //
95 //===----------------------------------------------------------------------===//
96 
97 #ifndef LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
98 #define LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
99 
100 #include "llvm/ADT/DenseSet.h"
101 #include "llvm/ADT/GraphTraits.h"
102 #include "llvm/ADT/MapVector.h"
103 #include "llvm/ADT/STLExtras.h"
104 #include "llvm/ADT/SetOperations.h"
105 #include "llvm/ADT/SetVector.h"
106 #include "llvm/ADT/Triple.h"
107 #include "llvm/ADT/iterator.h"
108 #include "llvm/Analysis/AssumeBundleQueries.h"
109 #include "llvm/Analysis/CFG.h"
110 #include "llvm/Analysis/CGSCCPassManager.h"
111 #include "llvm/Analysis/LazyCallGraph.h"
112 #include "llvm/Analysis/LoopInfo.h"
113 #include "llvm/Analysis/MemoryLocation.h"
114 #include "llvm/Analysis/MustExecute.h"
115 #include "llvm/Analysis/OptimizationRemarkEmitter.h"
116 #include "llvm/Analysis/PostDominators.h"
117 #include "llvm/Analysis/TargetLibraryInfo.h"
118 #include "llvm/IR/AbstractCallSite.h"
119 #include "llvm/IR/ConstantRange.h"
120 #include "llvm/IR/Constants.h"
121 #include "llvm/IR/InstIterator.h"
122 #include "llvm/IR/Instruction.h"
123 #include "llvm/IR/PassManager.h"
124 #include "llvm/IR/Value.h"
125 #include "llvm/Support/Alignment.h"
126 #include "llvm/Support/Allocator.h"
127 #include "llvm/Support/Casting.h"
128 #include "llvm/Support/DOTGraphTraits.h"
129 #include "llvm/Support/TimeProfiler.h"
130 #include "llvm/Transforms/Utils/CallGraphUpdater.h"
131 
132 #include <limits>
133 #include <map>
134 #include <optional>
135 
136 namespace llvm {
137 
138 class DataLayout;
139 class LLVMContext;
140 class Pass;
141 template <typename Fn> class function_ref;
142 struct AADepGraphNode;
143 struct AADepGraph;
144 struct Attributor;
145 struct AbstractAttribute;
146 struct InformationCache;
147 struct AAIsDead;
148 struct AttributorCallGraph;
149 struct IRPosition;
150 
151 class AAResults;
152 class Function;
153 
154 /// Abstract Attribute helper functions.
155 namespace AA {
156 using InstExclusionSetTy = SmallPtrSet<Instruction *, 4>;
157 
158 enum class GPUAddressSpace : unsigned {
159   Generic = 0,
160   Global = 1,
161   Shared = 3,
162   Constant = 4,
163   Local = 5,
164 };
165 
166 /// Flags to distinguish intra-procedural queries from *potentially*
167 /// inter-procedural queries. Not that information can be valid for both and
168 /// therefore both bits might be set.
169 enum ValueScope : uint8_t {
170   Intraprocedural = 1,
171   Interprocedural = 2,
172   AnyScope = Intraprocedural | Interprocedural,
173 };
174 
175 struct ValueAndContext : public std::pair<Value *, const Instruction *> {
176   using Base = std::pair<Value *, const Instruction *>;
177   ValueAndContext(const Base &B) : Base(B) {}
178   ValueAndContext(Value &V, const Instruction *CtxI) : Base(&V, CtxI) {}
179   ValueAndContext(Value &V, const Instruction &CtxI) : Base(&V, &CtxI) {}
180 
181   Value *getValue() const { return this->first; }
182   const Instruction *getCtxI() const { return this->second; }
183 };
184 
185 /// Return true if \p I is a `nosync` instruction. Use generic reasoning and
186 /// potentially the corresponding AANoSync.
187 bool isNoSyncInst(Attributor &A, const Instruction &I,
188                   const AbstractAttribute &QueryingAA);
189 
190 /// Return true if \p V is dynamically unique, that is, there are no two
191 /// "instances" of \p V at runtime with different values.
192 /// Note: If \p ForAnalysisOnly is set we only check that the Attributor will
193 /// never use \p V to represent two "instances" not that \p V could not
194 /// technically represent them.
195 bool isDynamicallyUnique(Attributor &A, const AbstractAttribute &QueryingAA,
196                          const Value &V, bool ForAnalysisOnly = true);
197 
198 /// Return true if \p V is a valid value in \p Scope, that is a constant or an
199 /// instruction/argument of \p Scope.
200 bool isValidInScope(const Value &V, const Function *Scope);
201 
202 /// Return true if the value of \p VAC is a valid at the position of \p VAC,
203 /// that is a constant, an argument of the same function, or an instruction in
204 /// that function that dominates the position.
205 bool isValidAtPosition(const ValueAndContext &VAC, InformationCache &InfoCache);
206 
207 /// Try to convert \p V to type \p Ty without introducing new instructions. If
208 /// this is not possible return `nullptr`. Note: this function basically knows
209 /// how to cast various constants.
210 Value *getWithType(Value &V, Type &Ty);
211 
212 /// Return the combination of \p A and \p B such that the result is a possible
213 /// value of both. \p B is potentially casted to match the type \p Ty or the
214 /// type of \p A if \p Ty is null.
215 ///
216 /// Examples:
217 ///        X + none  => X
218 /// not_none + undef => not_none
219 ///          V1 + V2 => nullptr
220 std::optional<Value *>
221 combineOptionalValuesInAAValueLatice(const std::optional<Value *> &A,
222                                      const std::optional<Value *> &B, Type *Ty);
223 
224 /// Helper to represent an access offset and size, with logic to deal with
225 /// uncertainty and check for overlapping accesses.
226 struct RangeTy {
227   int64_t Offset = Unassigned;
228   int64_t Size = Unassigned;
229 
230   RangeTy(int64_t Offset, int64_t Size) : Offset(Offset), Size(Size) {}
231   RangeTy() = default;
232   static RangeTy getUnknown() { return RangeTy{Unknown, Unknown}; }
233 
234   /// Return true if offset or size are unknown.
235   bool offsetOrSizeAreUnknown() const {
236     return Offset == RangeTy::Unknown || Size == RangeTy::Unknown;
237   }
238 
239   /// Return true if offset and size are unknown, thus this is the default
240   /// unknown object.
241   bool offsetAndSizeAreUnknown() const {
242     return Offset == RangeTy::Unknown && Size == RangeTy::Unknown;
243   }
244 
245   /// Return true if the offset and size are unassigned.
246   bool isUnassigned() const {
247     assert((Offset == RangeTy::Unassigned) == (Size == RangeTy::Unassigned) &&
248            "Inconsistent state!");
249     return Offset == RangeTy::Unassigned;
250   }
251 
252   /// Return true if this offset and size pair might describe an address that
253   /// overlaps with \p Range.
254   bool mayOverlap(const RangeTy &Range) const {
255     // Any unknown value and we are giving up -> overlap.
256     if (offsetOrSizeAreUnknown() || Range.offsetOrSizeAreUnknown())
257       return true;
258 
259     // Check if one offset point is in the other interval [offset,
260     // offset+size].
261     return Range.Offset + Range.Size > Offset && Range.Offset < Offset + Size;
262   }
263 
264   RangeTy &operator&=(const RangeTy &R) {
265     if (Offset == Unassigned)
266       Offset = R.Offset;
267     else if (R.Offset != Unassigned && R.Offset != Offset)
268       Offset = Unknown;
269 
270     if (Size == Unassigned)
271       Size = R.Size;
272     else if (Size == Unknown || R.Size == Unknown)
273       Size = Unknown;
274     else if (R.Size != Unassigned)
275       Size = std::max(Size, R.Size);
276 
277     return *this;
278   }
279 
280   /// Comparison for sorting ranges by offset.
281   ///
282   /// Returns true if the offset \p L is less than that of \p R.
283   inline static bool OffsetLessThan(const RangeTy &L, const RangeTy &R) {
284     return L.Offset < R.Offset;
285   }
286 
287   /// Constants used to represent special offsets or sizes.
288   /// - We cannot assume that Offsets and Size are non-negative.
289   /// - The constants should not clash with DenseMapInfo, such as EmptyKey
290   ///   (INT64_MAX) and TombstoneKey (INT64_MIN).
291   /// We use values "in the middle" of the 64 bit range to represent these
292   /// special cases.
293   static constexpr int64_t Unassigned = std::numeric_limits<int32_t>::min();
294   static constexpr int64_t Unknown = std::numeric_limits<int32_t>::max();
295 };
296 
297 inline raw_ostream &operator<<(raw_ostream &OS, const RangeTy &R) {
298   OS << "[" << R.Offset << ", " << R.Size << "]";
299   return OS;
300 }
301 
302 inline bool operator==(const RangeTy &A, const RangeTy &B) {
303   return A.Offset == B.Offset && A.Size == B.Size;
304 }
305 
306 inline bool operator!=(const RangeTy &A, const RangeTy &B) { return !(A == B); }
307 
308 /// Return the initial value of \p Obj with type \p Ty if that is a constant.
309 Constant *getInitialValueForObj(Value &Obj, Type &Ty,
310                                 const TargetLibraryInfo *TLI,
311                                 const DataLayout &DL,
312                                 RangeTy *RangePtr = nullptr);
313 
314 /// Collect all potential values \p LI could read into \p PotentialValues. That
315 /// is, the only values read by \p LI are assumed to be known and all are in
316 /// \p PotentialValues. \p PotentialValueOrigins will contain all the
317 /// instructions that might have put a potential value into \p PotentialValues.
318 /// Dependences onto \p QueryingAA are properly tracked, \p
319 /// UsedAssumedInformation will inform the caller if assumed information was
320 /// used.
321 ///
322 /// \returns True if the assumed potential copies are all in \p PotentialValues,
323 ///          false if something went wrong and the copies could not be
324 ///          determined.
325 bool getPotentiallyLoadedValues(
326     Attributor &A, LoadInst &LI, SmallSetVector<Value *, 4> &PotentialValues,
327     SmallSetVector<Instruction *, 4> &PotentialValueOrigins,
328     const AbstractAttribute &QueryingAA, bool &UsedAssumedInformation,
329     bool OnlyExact = false);
330 
331 /// Collect all potential values of the one stored by \p SI into
332 /// \p PotentialCopies. That is, the only copies that were made via the
333 /// store are assumed to be known and all are in \p PotentialCopies. Dependences
334 /// onto \p QueryingAA are properly tracked, \p UsedAssumedInformation will
335 /// inform the caller if assumed information was used.
336 ///
337 /// \returns True if the assumed potential copies are all in \p PotentialCopies,
338 ///          false if something went wrong and the copies could not be
339 ///          determined.
340 bool getPotentialCopiesOfStoredValue(
341     Attributor &A, StoreInst &SI, SmallSetVector<Value *, 4> &PotentialCopies,
342     const AbstractAttribute &QueryingAA, bool &UsedAssumedInformation,
343     bool OnlyExact = false);
344 
345 /// Return true if \p IRP is readonly. This will query respective AAs that
346 /// deduce the information and introduce dependences for \p QueryingAA.
347 bool isAssumedReadOnly(Attributor &A, const IRPosition &IRP,
348                        const AbstractAttribute &QueryingAA, bool &IsKnown);
349 
350 /// Return true if \p IRP is readnone. This will query respective AAs that
351 /// deduce the information and introduce dependences for \p QueryingAA.
352 bool isAssumedReadNone(Attributor &A, const IRPosition &IRP,
353                        const AbstractAttribute &QueryingAA, bool &IsKnown);
354 
355 /// Return true if \p ToI is potentially reachable from \p FromI without running
356 /// into any instruction in \p ExclusionSet The two instructions do not need to
357 /// be in the same function. \p GoBackwardsCB can be provided to convey domain
358 /// knowledge about the "lifespan" the user is interested in. By default, the
359 /// callers of \p FromI are checked as well to determine if \p ToI can be
360 /// reached. If the query is not interested in callers beyond a certain point,
361 /// e.g., a GPU kernel entry or the function containing an alloca, the
362 /// \p GoBackwardsCB should return false.
363 bool isPotentiallyReachable(
364     Attributor &A, const Instruction &FromI, const Instruction &ToI,
365     const AbstractAttribute &QueryingAA,
366     const AA::InstExclusionSetTy *ExclusionSet = nullptr,
367     std::function<bool(const Function &F)> GoBackwardsCB = nullptr);
368 
369 /// Same as above but it is sufficient to reach any instruction in \p ToFn.
370 bool isPotentiallyReachable(
371     Attributor &A, const Instruction &FromI, const Function &ToFn,
372     const AbstractAttribute &QueryingAA,
373     const AA::InstExclusionSetTy *ExclusionSet = nullptr,
374     std::function<bool(const Function &F)> GoBackwardsCB = nullptr);
375 
376 /// Return true if \p Obj is assumed to be a thread local object.
377 bool isAssumedThreadLocalObject(Attributor &A, Value &Obj,
378                                 const AbstractAttribute &QueryingAA);
379 
380 /// Return true if \p I is potentially affected by a barrier.
381 bool isPotentiallyAffectedByBarrier(Attributor &A, const Instruction &I,
382                                     const AbstractAttribute &QueryingAA);
383 bool isPotentiallyAffectedByBarrier(Attributor &A, ArrayRef<const Value *> Ptrs,
384                                     const AbstractAttribute &QueryingAA,
385                                     const Instruction *CtxI);
386 } // namespace AA
387 
388 template <>
389 struct DenseMapInfo<AA::ValueAndContext>
390     : public DenseMapInfo<AA::ValueAndContext::Base> {
391   using Base = DenseMapInfo<AA::ValueAndContext::Base>;
392   static inline AA::ValueAndContext getEmptyKey() {
393     return Base::getEmptyKey();
394   }
395   static inline AA::ValueAndContext getTombstoneKey() {
396     return Base::getTombstoneKey();
397   }
398   static unsigned getHashValue(const AA::ValueAndContext &VAC) {
399     return Base::getHashValue(VAC);
400   }
401 
402   static bool isEqual(const AA::ValueAndContext &LHS,
403                       const AA::ValueAndContext &RHS) {
404     return Base::isEqual(LHS, RHS);
405   }
406 };
407 
408 template <>
409 struct DenseMapInfo<AA::ValueScope> : public DenseMapInfo<unsigned char> {
410   using Base = DenseMapInfo<unsigned char>;
411   static inline AA::ValueScope getEmptyKey() {
412     return AA::ValueScope(Base::getEmptyKey());
413   }
414   static inline AA::ValueScope getTombstoneKey() {
415     return AA::ValueScope(Base::getTombstoneKey());
416   }
417   static unsigned getHashValue(const AA::ValueScope &S) {
418     return Base::getHashValue(S);
419   }
420 
421   static bool isEqual(const AA::ValueScope &LHS, const AA::ValueScope &RHS) {
422     return Base::isEqual(LHS, RHS);
423   }
424 };
425 
426 template <>
427 struct DenseMapInfo<const AA::InstExclusionSetTy *>
428     : public DenseMapInfo<void *> {
429   using super = DenseMapInfo<void *>;
430   static inline const AA::InstExclusionSetTy *getEmptyKey() {
431     return static_cast<const AA::InstExclusionSetTy *>(super::getEmptyKey());
432   }
433   static inline const AA::InstExclusionSetTy *getTombstoneKey() {
434     return static_cast<const AA::InstExclusionSetTy *>(
435         super::getTombstoneKey());
436   }
437   static unsigned getHashValue(const AA::InstExclusionSetTy *BES) {
438     unsigned H = 0;
439     if (BES)
440       for (const auto *II : *BES)
441         H += DenseMapInfo<const Instruction *>::getHashValue(II);
442     return H;
443   }
444   static bool isEqual(const AA::InstExclusionSetTy *LHS,
445                       const AA::InstExclusionSetTy *RHS) {
446     if (LHS == RHS)
447       return true;
448     if (LHS == getEmptyKey() || RHS == getEmptyKey() ||
449         LHS == getTombstoneKey() || RHS == getTombstoneKey())
450       return false;
451     if (!LHS || !RHS)
452       return ((LHS && LHS->empty()) || (RHS && RHS->empty()));
453     if (LHS->size() != RHS->size())
454       return false;
455     return llvm::set_is_subset(*LHS, *RHS);
456   }
457 };
458 
459 /// The value passed to the line option that defines the maximal initialization
460 /// chain length.
461 extern unsigned MaxInitializationChainLength;
462 
463 ///{
464 enum class ChangeStatus {
465   CHANGED,
466   UNCHANGED,
467 };
468 
469 ChangeStatus operator|(ChangeStatus l, ChangeStatus r);
470 ChangeStatus &operator|=(ChangeStatus &l, ChangeStatus r);
471 ChangeStatus operator&(ChangeStatus l, ChangeStatus r);
472 ChangeStatus &operator&=(ChangeStatus &l, ChangeStatus r);
473 
474 enum class DepClassTy {
475   REQUIRED, ///< The target cannot be valid if the source is not.
476   OPTIONAL, ///< The target may be valid if the source is not.
477   NONE,     ///< Do not track a dependence between source and target.
478 };
479 ///}
480 
481 /// The data structure for the nodes of a dependency graph
482 struct AADepGraphNode {
483 public:
484   virtual ~AADepGraphNode() = default;
485   using DepTy = PointerIntPair<AADepGraphNode *, 1>;
486 
487 protected:
488   /// Set of dependency graph nodes which should be updated if this one
489   /// is updated. The bit encodes if it is optional.
490   TinyPtrVector<DepTy> Deps;
491 
492   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
493   static AbstractAttribute *DepGetValAA(DepTy &DT) {
494     return cast<AbstractAttribute>(DT.getPointer());
495   }
496 
497   operator AbstractAttribute *() { return cast<AbstractAttribute>(this); }
498 
499 public:
500   using iterator =
501       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
502   using aaiterator =
503       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetValAA)>;
504 
505   aaiterator begin() { return aaiterator(Deps.begin(), &DepGetValAA); }
506   aaiterator end() { return aaiterator(Deps.end(), &DepGetValAA); }
507   iterator child_begin() { return iterator(Deps.begin(), &DepGetVal); }
508   iterator child_end() { return iterator(Deps.end(), &DepGetVal); }
509 
510   virtual void print(raw_ostream &OS) const { OS << "AADepNode Impl\n"; }
511   TinyPtrVector<DepTy> &getDeps() { return Deps; }
512 
513   friend struct Attributor;
514   friend struct AADepGraph;
515 };
516 
517 /// The data structure for the dependency graph
518 ///
519 /// Note that in this graph if there is an edge from A to B (A -> B),
520 /// then it means that B depends on A, and when the state of A is
521 /// updated, node B should also be updated
522 struct AADepGraph {
523   AADepGraph() = default;
524   ~AADepGraph() = default;
525 
526   using DepTy = AADepGraphNode::DepTy;
527   static AADepGraphNode *DepGetVal(DepTy &DT) { return DT.getPointer(); }
528   using iterator =
529       mapped_iterator<TinyPtrVector<DepTy>::iterator, decltype(&DepGetVal)>;
530 
531   /// There is no root node for the dependency graph. But the SCCIterator
532   /// requires a single entry point, so we maintain a fake("synthetic") root
533   /// node that depends on every node.
534   AADepGraphNode SyntheticRoot;
535   AADepGraphNode *GetEntryNode() { return &SyntheticRoot; }
536 
537   iterator begin() { return SyntheticRoot.child_begin(); }
538   iterator end() { return SyntheticRoot.child_end(); }
539 
540   void viewGraph();
541 
542   /// Dump graph to file
543   void dumpGraph();
544 
545   /// Print dependency graph
546   void print();
547 };
548 
549 /// Helper to describe and deal with positions in the LLVM-IR.
550 ///
551 /// A position in the IR is described by an anchor value and an "offset" that
552 /// could be the argument number, for call sites and arguments, or an indicator
553 /// of the "position kind". The kinds, specified in the Kind enum below, include
554 /// the locations in the attribute list, i.a., function scope and return value,
555 /// as well as a distinction between call sites and functions. Finally, there
556 /// are floating values that do not have a corresponding attribute list
557 /// position.
558 struct IRPosition {
559   // NOTE: In the future this definition can be changed to support recursive
560   // functions.
561   using CallBaseContext = CallBase;
562 
563   /// The positions we distinguish in the IR.
564   enum Kind : char {
565     IRP_INVALID,  ///< An invalid position.
566     IRP_FLOAT,    ///< A position that is not associated with a spot suitable
567                   ///< for attributes. This could be any value or instruction.
568     IRP_RETURNED, ///< An attribute for the function return value.
569     IRP_CALL_SITE_RETURNED, ///< An attribute for a call site return value.
570     IRP_FUNCTION,           ///< An attribute for a function (scope).
571     IRP_CALL_SITE,          ///< An attribute for a call site (function scope).
572     IRP_ARGUMENT,           ///< An attribute for a function argument.
573     IRP_CALL_SITE_ARGUMENT, ///< An attribute for a call site argument.
574   };
575 
576   /// Default constructor available to create invalid positions implicitly. All
577   /// other positions need to be created explicitly through the appropriate
578   /// static member function.
579   IRPosition() : Enc(nullptr, ENC_VALUE) { verify(); }
580 
581   /// Create a position describing the value of \p V.
582   static const IRPosition value(const Value &V,
583                                 const CallBaseContext *CBContext = nullptr) {
584     if (auto *Arg = dyn_cast<Argument>(&V))
585       return IRPosition::argument(*Arg, CBContext);
586     if (auto *CB = dyn_cast<CallBase>(&V))
587       return IRPosition::callsite_returned(*CB);
588     return IRPosition(const_cast<Value &>(V), IRP_FLOAT, CBContext);
589   }
590 
591   /// Create a position describing the instruction \p I. This is different from
592   /// the value version because call sites are treated as intrusctions rather
593   /// than their return value in this function.
594   static const IRPosition inst(const Instruction &I,
595                                const CallBaseContext *CBContext = nullptr) {
596     return IRPosition(const_cast<Instruction &>(I), IRP_FLOAT, CBContext);
597   }
598 
599   /// Create a position describing the function scope of \p F.
600   /// \p CBContext is used for call base specific analysis.
601   static const IRPosition function(const Function &F,
602                                    const CallBaseContext *CBContext = nullptr) {
603     return IRPosition(const_cast<Function &>(F), IRP_FUNCTION, CBContext);
604   }
605 
606   /// Create a position describing the returned value of \p F.
607   /// \p CBContext is used for call base specific analysis.
608   static const IRPosition returned(const Function &F,
609                                    const CallBaseContext *CBContext = nullptr) {
610     return IRPosition(const_cast<Function &>(F), IRP_RETURNED, CBContext);
611   }
612 
613   /// Create a position describing the argument \p Arg.
614   /// \p CBContext is used for call base specific analysis.
615   static const IRPosition argument(const Argument &Arg,
616                                    const CallBaseContext *CBContext = nullptr) {
617     return IRPosition(const_cast<Argument &>(Arg), IRP_ARGUMENT, CBContext);
618   }
619 
620   /// Create a position describing the function scope of \p CB.
621   static const IRPosition callsite_function(const CallBase &CB) {
622     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE);
623   }
624 
625   /// Create a position describing the returned value of \p CB.
626   static const IRPosition callsite_returned(const CallBase &CB) {
627     return IRPosition(const_cast<CallBase &>(CB), IRP_CALL_SITE_RETURNED);
628   }
629 
630   /// Create a position describing the argument of \p CB at position \p ArgNo.
631   static const IRPosition callsite_argument(const CallBase &CB,
632                                             unsigned ArgNo) {
633     return IRPosition(const_cast<Use &>(CB.getArgOperandUse(ArgNo)),
634                       IRP_CALL_SITE_ARGUMENT);
635   }
636 
637   /// Create a position describing the argument of \p ACS at position \p ArgNo.
638   static const IRPosition callsite_argument(AbstractCallSite ACS,
639                                             unsigned ArgNo) {
640     if (ACS.getNumArgOperands() <= ArgNo)
641       return IRPosition();
642     int CSArgNo = ACS.getCallArgOperandNo(ArgNo);
643     if (CSArgNo >= 0)
644       return IRPosition::callsite_argument(
645           cast<CallBase>(*ACS.getInstruction()), CSArgNo);
646     return IRPosition();
647   }
648 
649   /// Create a position with function scope matching the "context" of \p IRP.
650   /// If \p IRP is a call site (see isAnyCallSitePosition()) then the result
651   /// will be a call site position, otherwise the function position of the
652   /// associated function.
653   static const IRPosition
654   function_scope(const IRPosition &IRP,
655                  const CallBaseContext *CBContext = nullptr) {
656     if (IRP.isAnyCallSitePosition()) {
657       return IRPosition::callsite_function(
658           cast<CallBase>(IRP.getAnchorValue()));
659     }
660     assert(IRP.getAssociatedFunction());
661     return IRPosition::function(*IRP.getAssociatedFunction(), CBContext);
662   }
663 
664   bool operator==(const IRPosition &RHS) const {
665     return Enc == RHS.Enc && RHS.CBContext == CBContext;
666   }
667   bool operator!=(const IRPosition &RHS) const { return !(*this == RHS); }
668 
669   /// Return the value this abstract attribute is anchored with.
670   ///
671   /// The anchor value might not be the associated value if the latter is not
672   /// sufficient to determine where arguments will be manifested. This is, so
673   /// far, only the case for call site arguments as the value is not sufficient
674   /// to pinpoint them. Instead, we can use the call site as an anchor.
675   Value &getAnchorValue() const {
676     switch (getEncodingBits()) {
677     case ENC_VALUE:
678     case ENC_RETURNED_VALUE:
679     case ENC_FLOATING_FUNCTION:
680       return *getAsValuePtr();
681     case ENC_CALL_SITE_ARGUMENT_USE:
682       return *(getAsUsePtr()->getUser());
683     default:
684       llvm_unreachable("Unkown encoding!");
685     };
686   }
687 
688   /// Return the associated function, if any.
689   Function *getAssociatedFunction() const {
690     if (auto *CB = dyn_cast<CallBase>(&getAnchorValue())) {
691       // We reuse the logic that associates callback calles to arguments of a
692       // call site here to identify the callback callee as the associated
693       // function.
694       if (Argument *Arg = getAssociatedArgument())
695         return Arg->getParent();
696       return CB->getCalledFunction();
697     }
698     return getAnchorScope();
699   }
700 
701   /// Return the associated argument, if any.
702   Argument *getAssociatedArgument() const;
703 
704   /// Return true if the position refers to a function interface, that is the
705   /// function scope, the function return, or an argument.
706   bool isFnInterfaceKind() const {
707     switch (getPositionKind()) {
708     case IRPosition::IRP_FUNCTION:
709     case IRPosition::IRP_RETURNED:
710     case IRPosition::IRP_ARGUMENT:
711       return true;
712     default:
713       return false;
714     }
715   }
716 
717   /// Return the Function surrounding the anchor value.
718   Function *getAnchorScope() const {
719     Value &V = getAnchorValue();
720     if (isa<Function>(V))
721       return &cast<Function>(V);
722     if (isa<Argument>(V))
723       return cast<Argument>(V).getParent();
724     if (isa<Instruction>(V))
725       return cast<Instruction>(V).getFunction();
726     return nullptr;
727   }
728 
729   /// Return the context instruction, if any.
730   Instruction *getCtxI() const {
731     Value &V = getAnchorValue();
732     if (auto *I = dyn_cast<Instruction>(&V))
733       return I;
734     if (auto *Arg = dyn_cast<Argument>(&V))
735       if (!Arg->getParent()->isDeclaration())
736         return &Arg->getParent()->getEntryBlock().front();
737     if (auto *F = dyn_cast<Function>(&V))
738       if (!F->isDeclaration())
739         return &(F->getEntryBlock().front());
740     return nullptr;
741   }
742 
743   /// Return the value this abstract attribute is associated with.
744   Value &getAssociatedValue() const {
745     if (getCallSiteArgNo() < 0 || isa<Argument>(&getAnchorValue()))
746       return getAnchorValue();
747     assert(isa<CallBase>(&getAnchorValue()) && "Expected a call base!");
748     return *cast<CallBase>(&getAnchorValue())
749                 ->getArgOperand(getCallSiteArgNo());
750   }
751 
752   /// Return the type this abstract attribute is associated with.
753   Type *getAssociatedType() const {
754     if (getPositionKind() == IRPosition::IRP_RETURNED)
755       return getAssociatedFunction()->getReturnType();
756     return getAssociatedValue().getType();
757   }
758 
759   /// Return the callee argument number of the associated value if it is an
760   /// argument or call site argument, otherwise a negative value. In contrast to
761   /// `getCallSiteArgNo` this method will always return the "argument number"
762   /// from the perspective of the callee. This may not the same as the call site
763   /// if this is a callback call.
764   int getCalleeArgNo() const {
765     return getArgNo(/* CallbackCalleeArgIfApplicable */ true);
766   }
767 
768   /// Return the call site argument number of the associated value if it is an
769   /// argument or call site argument, otherwise a negative value. In contrast to
770   /// `getCalleArgNo` this method will always return the "operand number" from
771   /// the perspective of the call site. This may not the same as the callee
772   /// perspective if this is a callback call.
773   int getCallSiteArgNo() const {
774     return getArgNo(/* CallbackCalleeArgIfApplicable */ false);
775   }
776 
777   /// Return the index in the attribute list for this position.
778   unsigned getAttrIdx() const {
779     switch (getPositionKind()) {
780     case IRPosition::IRP_INVALID:
781     case IRPosition::IRP_FLOAT:
782       break;
783     case IRPosition::IRP_FUNCTION:
784     case IRPosition::IRP_CALL_SITE:
785       return AttributeList::FunctionIndex;
786     case IRPosition::IRP_RETURNED:
787     case IRPosition::IRP_CALL_SITE_RETURNED:
788       return AttributeList::ReturnIndex;
789     case IRPosition::IRP_ARGUMENT:
790     case IRPosition::IRP_CALL_SITE_ARGUMENT:
791       return getCallSiteArgNo() + AttributeList::FirstArgIndex;
792     }
793     llvm_unreachable(
794         "There is no attribute index for a floating or invalid position!");
795   }
796 
797   /// Return the associated position kind.
798   Kind getPositionKind() const {
799     char EncodingBits = getEncodingBits();
800     if (EncodingBits == ENC_CALL_SITE_ARGUMENT_USE)
801       return IRP_CALL_SITE_ARGUMENT;
802     if (EncodingBits == ENC_FLOATING_FUNCTION)
803       return IRP_FLOAT;
804 
805     Value *V = getAsValuePtr();
806     if (!V)
807       return IRP_INVALID;
808     if (isa<Argument>(V))
809       return IRP_ARGUMENT;
810     if (isa<Function>(V))
811       return isReturnPosition(EncodingBits) ? IRP_RETURNED : IRP_FUNCTION;
812     if (isa<CallBase>(V))
813       return isReturnPosition(EncodingBits) ? IRP_CALL_SITE_RETURNED
814                                             : IRP_CALL_SITE;
815     return IRP_FLOAT;
816   }
817 
818   /// TODO: Figure out if the attribute related helper functions should live
819   ///       here or somewhere else.
820 
821   /// Return true if any kind in \p AKs existing in the IR at a position that
822   /// will affect this one. See also getAttrs(...).
823   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
824   ///                                 e.g., the function position if this is an
825   ///                                 argument position, should be ignored.
826   bool hasAttr(ArrayRef<Attribute::AttrKind> AKs,
827                bool IgnoreSubsumingPositions = false,
828                Attributor *A = nullptr) const;
829 
830   /// Return the attributes of any kind in \p AKs existing in the IR at a
831   /// position that will affect this one. While each position can only have a
832   /// single attribute of any kind in \p AKs, there are "subsuming" positions
833   /// that could have an attribute as well. This method returns all attributes
834   /// found in \p Attrs.
835   /// \param IgnoreSubsumingPositions Flag to determine if subsuming positions,
836   ///                                 e.g., the function position if this is an
837   ///                                 argument position, should be ignored.
838   void getAttrs(ArrayRef<Attribute::AttrKind> AKs,
839                 SmallVectorImpl<Attribute> &Attrs,
840                 bool IgnoreSubsumingPositions = false,
841                 Attributor *A = nullptr) const;
842 
843   /// Remove the attribute of kind \p AKs existing in the IR at this position.
844   void removeAttrs(ArrayRef<Attribute::AttrKind> AKs) const {
845     if (getPositionKind() == IRP_INVALID || getPositionKind() == IRP_FLOAT)
846       return;
847 
848     AttributeList AttrList;
849     auto *CB = dyn_cast<CallBase>(&getAnchorValue());
850     if (CB)
851       AttrList = CB->getAttributes();
852     else
853       AttrList = getAssociatedFunction()->getAttributes();
854 
855     LLVMContext &Ctx = getAnchorValue().getContext();
856     for (Attribute::AttrKind AK : AKs)
857       AttrList = AttrList.removeAttributeAtIndex(Ctx, getAttrIdx(), AK);
858 
859     if (CB)
860       CB->setAttributes(AttrList);
861     else
862       getAssociatedFunction()->setAttributes(AttrList);
863   }
864 
865   bool isAnyCallSitePosition() const {
866     switch (getPositionKind()) {
867     case IRPosition::IRP_CALL_SITE:
868     case IRPosition::IRP_CALL_SITE_RETURNED:
869     case IRPosition::IRP_CALL_SITE_ARGUMENT:
870       return true;
871     default:
872       return false;
873     }
874   }
875 
876   /// Return true if the position is an argument or call site argument.
877   bool isArgumentPosition() const {
878     switch (getPositionKind()) {
879     case IRPosition::IRP_ARGUMENT:
880     case IRPosition::IRP_CALL_SITE_ARGUMENT:
881       return true;
882     default:
883       return false;
884     }
885   }
886 
887   /// Return the same position without the call base context.
888   IRPosition stripCallBaseContext() const {
889     IRPosition Result = *this;
890     Result.CBContext = nullptr;
891     return Result;
892   }
893 
894   /// Get the call base context from the position.
895   const CallBaseContext *getCallBaseContext() const { return CBContext; }
896 
897   /// Check if the position has any call base context.
898   bool hasCallBaseContext() const { return CBContext != nullptr; }
899 
900   /// Special DenseMap key values.
901   ///
902   ///{
903   static const IRPosition EmptyKey;
904   static const IRPosition TombstoneKey;
905   ///}
906 
907   /// Conversion into a void * to allow reuse of pointer hashing.
908   operator void *() const { return Enc.getOpaqueValue(); }
909 
910 private:
911   /// Private constructor for special values only!
912   explicit IRPosition(void *Ptr, const CallBaseContext *CBContext = nullptr)
913       : CBContext(CBContext) {
914     Enc.setFromOpaqueValue(Ptr);
915   }
916 
917   /// IRPosition anchored at \p AnchorVal with kind/argument numbet \p PK.
918   explicit IRPosition(Value &AnchorVal, Kind PK,
919                       const CallBaseContext *CBContext = nullptr)
920       : CBContext(CBContext) {
921     switch (PK) {
922     case IRPosition::IRP_INVALID:
923       llvm_unreachable("Cannot create invalid IRP with an anchor value!");
924       break;
925     case IRPosition::IRP_FLOAT:
926       // Special case for floating functions.
927       if (isa<Function>(AnchorVal) || isa<CallBase>(AnchorVal))
928         Enc = {&AnchorVal, ENC_FLOATING_FUNCTION};
929       else
930         Enc = {&AnchorVal, ENC_VALUE};
931       break;
932     case IRPosition::IRP_FUNCTION:
933     case IRPosition::IRP_CALL_SITE:
934       Enc = {&AnchorVal, ENC_VALUE};
935       break;
936     case IRPosition::IRP_RETURNED:
937     case IRPosition::IRP_CALL_SITE_RETURNED:
938       Enc = {&AnchorVal, ENC_RETURNED_VALUE};
939       break;
940     case IRPosition::IRP_ARGUMENT:
941       Enc = {&AnchorVal, ENC_VALUE};
942       break;
943     case IRPosition::IRP_CALL_SITE_ARGUMENT:
944       llvm_unreachable(
945           "Cannot create call site argument IRP with an anchor value!");
946       break;
947     }
948     verify();
949   }
950 
951   /// Return the callee argument number of the associated value if it is an
952   /// argument or call site argument. See also `getCalleeArgNo` and
953   /// `getCallSiteArgNo`.
954   int getArgNo(bool CallbackCalleeArgIfApplicable) const {
955     if (CallbackCalleeArgIfApplicable)
956       if (Argument *Arg = getAssociatedArgument())
957         return Arg->getArgNo();
958     switch (getPositionKind()) {
959     case IRPosition::IRP_ARGUMENT:
960       return cast<Argument>(getAsValuePtr())->getArgNo();
961     case IRPosition::IRP_CALL_SITE_ARGUMENT: {
962       Use &U = *getAsUsePtr();
963       return cast<CallBase>(U.getUser())->getArgOperandNo(&U);
964     }
965     default:
966       return -1;
967     }
968   }
969 
970   /// IRPosition for the use \p U. The position kind \p PK needs to be
971   /// IRP_CALL_SITE_ARGUMENT, the anchor value is the user, the associated value
972   /// the used value.
973   explicit IRPosition(Use &U, Kind PK) {
974     assert(PK == IRP_CALL_SITE_ARGUMENT &&
975            "Use constructor is for call site arguments only!");
976     Enc = {&U, ENC_CALL_SITE_ARGUMENT_USE};
977     verify();
978   }
979 
980   /// Verify internal invariants.
981   void verify();
982 
983   /// Return the attributes of kind \p AK existing in the IR as attribute.
984   bool getAttrsFromIRAttr(Attribute::AttrKind AK,
985                           SmallVectorImpl<Attribute> &Attrs) const;
986 
987   /// Return the attributes of kind \p AK existing in the IR as operand bundles
988   /// of an llvm.assume.
989   bool getAttrsFromAssumes(Attribute::AttrKind AK,
990                            SmallVectorImpl<Attribute> &Attrs,
991                            Attributor &A) const;
992 
993   /// Return the underlying pointer as Value *, valid for all positions but
994   /// IRP_CALL_SITE_ARGUMENT.
995   Value *getAsValuePtr() const {
996     assert(getEncodingBits() != ENC_CALL_SITE_ARGUMENT_USE &&
997            "Not a value pointer!");
998     return reinterpret_cast<Value *>(Enc.getPointer());
999   }
1000 
1001   /// Return the underlying pointer as Use *, valid only for
1002   /// IRP_CALL_SITE_ARGUMENT positions.
1003   Use *getAsUsePtr() const {
1004     assert(getEncodingBits() == ENC_CALL_SITE_ARGUMENT_USE &&
1005            "Not a value pointer!");
1006     return reinterpret_cast<Use *>(Enc.getPointer());
1007   }
1008 
1009   /// Return true if \p EncodingBits describe a returned or call site returned
1010   /// position.
1011   static bool isReturnPosition(char EncodingBits) {
1012     return EncodingBits == ENC_RETURNED_VALUE;
1013   }
1014 
1015   /// Return true if the encoding bits describe a returned or call site returned
1016   /// position.
1017   bool isReturnPosition() const { return isReturnPosition(getEncodingBits()); }
1018 
1019   /// The encoding of the IRPosition is a combination of a pointer and two
1020   /// encoding bits. The values of the encoding bits are defined in the enum
1021   /// below. The pointer is either a Value* (for the first three encoding bit
1022   /// combinations) or Use* (for ENC_CALL_SITE_ARGUMENT_USE).
1023   ///
1024   ///{
1025   enum {
1026     ENC_VALUE = 0b00,
1027     ENC_RETURNED_VALUE = 0b01,
1028     ENC_FLOATING_FUNCTION = 0b10,
1029     ENC_CALL_SITE_ARGUMENT_USE = 0b11,
1030   };
1031 
1032   // Reserve the maximal amount of bits so there is no need to mask out the
1033   // remaining ones. We will not encode anything else in the pointer anyway.
1034   static constexpr int NumEncodingBits =
1035       PointerLikeTypeTraits<void *>::NumLowBitsAvailable;
1036   static_assert(NumEncodingBits >= 2, "At least two bits are required!");
1037 
1038   /// The pointer with the encoding bits.
1039   PointerIntPair<void *, NumEncodingBits, char> Enc;
1040   ///}
1041 
1042   /// Call base context. Used for callsite specific analysis.
1043   const CallBaseContext *CBContext = nullptr;
1044 
1045   /// Return the encoding bits.
1046   char getEncodingBits() const { return Enc.getInt(); }
1047 };
1048 
1049 /// Helper that allows IRPosition as a key in a DenseMap.
1050 template <> struct DenseMapInfo<IRPosition> {
1051   static inline IRPosition getEmptyKey() { return IRPosition::EmptyKey; }
1052   static inline IRPosition getTombstoneKey() {
1053     return IRPosition::TombstoneKey;
1054   }
1055   static unsigned getHashValue(const IRPosition &IRP) {
1056     return (DenseMapInfo<void *>::getHashValue(IRP) << 4) ^
1057            (DenseMapInfo<Value *>::getHashValue(IRP.getCallBaseContext()));
1058   }
1059 
1060   static bool isEqual(const IRPosition &a, const IRPosition &b) {
1061     return a == b;
1062   }
1063 };
1064 
1065 /// A visitor class for IR positions.
1066 ///
1067 /// Given a position P, the SubsumingPositionIterator allows to visit "subsuming
1068 /// positions" wrt. attributes/information. Thus, if a piece of information
1069 /// holds for a subsuming position, it also holds for the position P.
1070 ///
1071 /// The subsuming positions always include the initial position and then,
1072 /// depending on the position kind, additionally the following ones:
1073 /// - for IRP_RETURNED:
1074 ///   - the function (IRP_FUNCTION)
1075 /// - for IRP_ARGUMENT:
1076 ///   - the function (IRP_FUNCTION)
1077 /// - for IRP_CALL_SITE:
1078 ///   - the callee (IRP_FUNCTION), if known
1079 /// - for IRP_CALL_SITE_RETURNED:
1080 ///   - the callee (IRP_RETURNED), if known
1081 ///   - the call site (IRP_FUNCTION)
1082 ///   - the callee (IRP_FUNCTION), if known
1083 /// - for IRP_CALL_SITE_ARGUMENT:
1084 ///   - the argument of the callee (IRP_ARGUMENT), if known
1085 ///   - the callee (IRP_FUNCTION), if known
1086 ///   - the position the call site argument is associated with if it is not
1087 ///     anchored to the call site, e.g., if it is an argument then the argument
1088 ///     (IRP_ARGUMENT)
1089 class SubsumingPositionIterator {
1090   SmallVector<IRPosition, 4> IRPositions;
1091   using iterator = decltype(IRPositions)::iterator;
1092 
1093 public:
1094   SubsumingPositionIterator(const IRPosition &IRP);
1095   iterator begin() { return IRPositions.begin(); }
1096   iterator end() { return IRPositions.end(); }
1097 };
1098 
1099 /// Wrapper for FunctionAnalysisManager.
1100 struct AnalysisGetter {
1101   // The client may be running the old pass manager, in which case, we need to
1102   // map the requested Analysis to its equivalent wrapper in the old pass
1103   // manager. The scheme implemented here does not require every Analysis to be
1104   // updated. Only those new analyses that the client cares about in the old
1105   // pass manager need to expose a LegacyWrapper type, and that wrapper should
1106   // support a getResult() method that matches the new Analysis.
1107   //
1108   // We need SFINAE to check for the LegacyWrapper, but function templates don't
1109   // allow partial specialization, which is needed in this case. So instead, we
1110   // use a constexpr bool to perform the SFINAE, and then use this information
1111   // inside the function template.
1112   template <typename, typename = void> static constexpr bool HasLegacyWrapper = false;
1113 
1114   template <typename Analysis>
1115   typename Analysis::Result *getAnalysis(const Function &F) {
1116     if (FAM)
1117       return &FAM->getResult<Analysis>(const_cast<Function &>(F));
1118     if constexpr (HasLegacyWrapper<Analysis>)
1119       if (LegacyPass)
1120         return &LegacyPass
1121                     ->getAnalysis<typename Analysis::LegacyWrapper>(
1122                         const_cast<Function &>(F))
1123                     .getResult();
1124     return nullptr;
1125   }
1126 
1127   AnalysisGetter(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
1128   AnalysisGetter(Pass *P) : LegacyPass(P) {}
1129   AnalysisGetter() = default;
1130 
1131 private:
1132   FunctionAnalysisManager *FAM = nullptr;
1133   Pass *LegacyPass = nullptr;
1134 };
1135 
1136 template <typename Analysis>
1137 constexpr bool AnalysisGetter::HasLegacyWrapper<
1138       Analysis, std::void_t<typename Analysis::LegacyWrapper>> = true;
1139 
1140 /// Data structure to hold cached (LLVM-IR) information.
1141 ///
1142 /// All attributes are given an InformationCache object at creation time to
1143 /// avoid inspection of the IR by all of them individually. This default
1144 /// InformationCache will hold information required by 'default' attributes,
1145 /// thus the ones deduced when Attributor::identifyDefaultAbstractAttributes(..)
1146 /// is called.
1147 ///
1148 /// If custom abstract attributes, registered manually through
1149 /// Attributor::registerAA(...), need more information, especially if it is not
1150 /// reusable, it is advised to inherit from the InformationCache and cast the
1151 /// instance down in the abstract attributes.
1152 struct InformationCache {
1153   InformationCache(const Module &M, AnalysisGetter &AG,
1154                    BumpPtrAllocator &Allocator, SetVector<Function *> *CGSCC)
1155       : DL(M.getDataLayout()), Allocator(Allocator),
1156         Explorer(
1157             /* ExploreInterBlock */ true, /* ExploreCFGForward */ true,
1158             /* ExploreCFGBackward */ true,
1159             /* LIGetter */
1160             [&](const Function &F) { return AG.getAnalysis<LoopAnalysis>(F); },
1161             /* DTGetter */
1162             [&](const Function &F) {
1163               return AG.getAnalysis<DominatorTreeAnalysis>(F);
1164             },
1165             /* PDTGetter */
1166             [&](const Function &F) {
1167               return AG.getAnalysis<PostDominatorTreeAnalysis>(F);
1168             }),
1169         AG(AG), TargetTriple(M.getTargetTriple()) {
1170     if (CGSCC)
1171       initializeModuleSlice(*CGSCC);
1172   }
1173 
1174   ~InformationCache() {
1175     // The FunctionInfo objects are allocated via a BumpPtrAllocator, we call
1176     // the destructor manually.
1177     for (auto &It : FuncInfoMap)
1178       It.getSecond()->~FunctionInfo();
1179     // Same is true for the instruction exclusions sets.
1180     using AA::InstExclusionSetTy;
1181     for (auto *BES : BESets)
1182       BES->~InstExclusionSetTy();
1183   }
1184 
1185   /// Apply \p CB to all uses of \p F. If \p LookThroughConstantExprUses is
1186   /// true, constant expression users are not given to \p CB but their uses are
1187   /// traversed transitively.
1188   template <typename CBTy>
1189   static void foreachUse(Function &F, CBTy CB,
1190                          bool LookThroughConstantExprUses = true) {
1191     SmallVector<Use *, 8> Worklist(make_pointer_range(F.uses()));
1192 
1193     for (unsigned Idx = 0; Idx < Worklist.size(); ++Idx) {
1194       Use &U = *Worklist[Idx];
1195 
1196       // Allow use in constant bitcasts and simply look through them.
1197       if (LookThroughConstantExprUses && isa<ConstantExpr>(U.getUser())) {
1198         for (Use &CEU : cast<ConstantExpr>(U.getUser())->uses())
1199           Worklist.push_back(&CEU);
1200         continue;
1201       }
1202 
1203       CB(U);
1204     }
1205   }
1206 
1207   /// Initialize the ModuleSlice member based on \p SCC. ModuleSlices contains
1208   /// (a subset of) all functions that we can look at during this SCC traversal.
1209   /// This includes functions (transitively) called from the SCC and the
1210   /// (transitive) callers of SCC functions. We also can look at a function if
1211   /// there is a "reference edge", i.a., if the function somehow uses (!=calls)
1212   /// a function in the SCC or a caller of a function in the SCC.
1213   void initializeModuleSlice(SetVector<Function *> &SCC) {
1214     ModuleSlice.insert(SCC.begin(), SCC.end());
1215 
1216     SmallPtrSet<Function *, 16> Seen;
1217     SmallVector<Function *, 16> Worklist(SCC.begin(), SCC.end());
1218     while (!Worklist.empty()) {
1219       Function *F = Worklist.pop_back_val();
1220       ModuleSlice.insert(F);
1221 
1222       for (Instruction &I : instructions(*F))
1223         if (auto *CB = dyn_cast<CallBase>(&I))
1224           if (Function *Callee = CB->getCalledFunction())
1225             if (Seen.insert(Callee).second)
1226               Worklist.push_back(Callee);
1227     }
1228 
1229     Seen.clear();
1230     Worklist.append(SCC.begin(), SCC.end());
1231     while (!Worklist.empty()) {
1232       Function *F = Worklist.pop_back_val();
1233       ModuleSlice.insert(F);
1234 
1235       // Traverse all transitive uses.
1236       foreachUse(*F, [&](Use &U) {
1237         if (auto *UsrI = dyn_cast<Instruction>(U.getUser()))
1238           if (Seen.insert(UsrI->getFunction()).second)
1239             Worklist.push_back(UsrI->getFunction());
1240       });
1241     }
1242   }
1243 
1244   /// The slice of the module we are allowed to look at.
1245   SmallPtrSet<Function *, 8> ModuleSlice;
1246 
1247   /// A vector type to hold instructions.
1248   using InstructionVectorTy = SmallVector<Instruction *, 8>;
1249 
1250   /// A map type from opcodes to instructions with this opcode.
1251   using OpcodeInstMapTy = DenseMap<unsigned, InstructionVectorTy *>;
1252 
1253   /// Return the map that relates "interesting" opcodes with all instructions
1254   /// with that opcode in \p F.
1255   OpcodeInstMapTy &getOpcodeInstMapForFunction(const Function &F) {
1256     return getFunctionInfo(F).OpcodeInstMap;
1257   }
1258 
1259   /// Return the instructions in \p F that may read or write memory.
1260   InstructionVectorTy &getReadOrWriteInstsForFunction(const Function &F) {
1261     return getFunctionInfo(F).RWInsts;
1262   }
1263 
1264   /// Return MustBeExecutedContextExplorer
1265   MustBeExecutedContextExplorer &getMustBeExecutedContextExplorer() {
1266     return Explorer;
1267   }
1268 
1269   /// Return TargetLibraryInfo for function \p F.
1270   TargetLibraryInfo *getTargetLibraryInfoForFunction(const Function &F) {
1271     return AG.getAnalysis<TargetLibraryAnalysis>(F);
1272   }
1273 
1274   /// Return AliasAnalysis Result for function \p F.
1275   AAResults *getAAResultsForFunction(const Function &F);
1276 
1277   /// Return true if \p Arg is involved in a must-tail call, thus the argument
1278   /// of the caller or callee.
1279   bool isInvolvedInMustTailCall(const Argument &Arg) {
1280     FunctionInfo &FI = getFunctionInfo(*Arg.getParent());
1281     return FI.CalledViaMustTail || FI.ContainsMustTailCall;
1282   }
1283 
1284   bool isOnlyUsedByAssume(const Instruction &I) const {
1285     return AssumeOnlyValues.contains(&I);
1286   }
1287 
1288   /// Return the analysis result from a pass \p AP for function \p F.
1289   template <typename AP>
1290   typename AP::Result *getAnalysisResultForFunction(const Function &F) {
1291     return AG.getAnalysis<AP>(F);
1292   }
1293 
1294   /// Return datalayout used in the module.
1295   const DataLayout &getDL() { return DL; }
1296 
1297   /// Return the map conaining all the knowledge we have from `llvm.assume`s.
1298   const RetainedKnowledgeMap &getKnowledgeMap() const { return KnowledgeMap; }
1299 
1300   /// Given \p BES, return a uniqued version. \p BES is destroyed in the
1301   /// process.
1302   const AA::InstExclusionSetTy *
1303   getOrCreateUniqueBlockExecutionSet(const AA::InstExclusionSetTy *BES) {
1304     auto It = BESets.find(BES);
1305     if (It != BESets.end())
1306       return *It;
1307     auto *UniqueBES = new (Allocator) AA::InstExclusionSetTy(*BES);
1308     BESets.insert(UniqueBES);
1309     return UniqueBES;
1310   }
1311 
1312   /// Check whether \p F is part of module slice.
1313   bool isInModuleSlice(const Function &F) {
1314     return ModuleSlice.empty() || ModuleSlice.count(const_cast<Function *>(&F));
1315   }
1316 
1317   /// Return true if the stack (llvm::Alloca) can be accessed by other threads.
1318   bool stackIsAccessibleByOtherThreads() { return !targetIsGPU(); }
1319 
1320   /// Return true if the target is a GPU.
1321   bool targetIsGPU() {
1322     return TargetTriple.isAMDGPU() || TargetTriple.isNVPTX();
1323   }
1324 
1325 private:
1326   struct FunctionInfo {
1327     ~FunctionInfo();
1328 
1329     /// A nested map that remembers all instructions in a function with a
1330     /// certain instruction opcode (Instruction::getOpcode()).
1331     OpcodeInstMapTy OpcodeInstMap;
1332 
1333     /// A map from functions to their instructions that may read or write
1334     /// memory.
1335     InstructionVectorTy RWInsts;
1336 
1337     /// Function is called by a `musttail` call.
1338     bool CalledViaMustTail;
1339 
1340     /// Function contains a `musttail` call.
1341     bool ContainsMustTailCall;
1342   };
1343 
1344   /// A map type from functions to informatio about it.
1345   DenseMap<const Function *, FunctionInfo *> FuncInfoMap;
1346 
1347   /// Return information about the function \p F, potentially by creating it.
1348   FunctionInfo &getFunctionInfo(const Function &F) {
1349     FunctionInfo *&FI = FuncInfoMap[&F];
1350     if (!FI) {
1351       FI = new (Allocator) FunctionInfo();
1352       initializeInformationCache(F, *FI);
1353     }
1354     return *FI;
1355   }
1356 
1357   /// Initialize the function information cache \p FI for the function \p F.
1358   ///
1359   /// This method needs to be called for all function that might be looked at
1360   /// through the information cache interface *prior* to looking at them.
1361   void initializeInformationCache(const Function &F, FunctionInfo &FI);
1362 
1363   /// The datalayout used in the module.
1364   const DataLayout &DL;
1365 
1366   /// The allocator used to allocate memory, e.g. for `FunctionInfo`s.
1367   BumpPtrAllocator &Allocator;
1368 
1369   /// MustBeExecutedContextExplorer
1370   MustBeExecutedContextExplorer Explorer;
1371 
1372   /// A map with knowledge retained in `llvm.assume` instructions.
1373   RetainedKnowledgeMap KnowledgeMap;
1374 
1375   /// A container for all instructions that are only used by `llvm.assume`.
1376   SetVector<const Instruction *> AssumeOnlyValues;
1377 
1378   /// Cache for block sets to allow reuse.
1379   DenseSet<AA::InstExclusionSetTy *> BESets;
1380 
1381   /// Getters for analysis.
1382   AnalysisGetter &AG;
1383 
1384   /// Set of inlineable functions
1385   SmallPtrSet<const Function *, 8> InlineableFunctions;
1386 
1387   /// The triple describing the target machine.
1388   Triple TargetTriple;
1389 
1390   /// Give the Attributor access to the members so
1391   /// Attributor::identifyDefaultAbstractAttributes(...) can initialize them.
1392   friend struct Attributor;
1393 };
1394 
1395 /// Configuration for the Attributor.
1396 struct AttributorConfig {
1397 
1398   AttributorConfig(CallGraphUpdater &CGUpdater) : CGUpdater(CGUpdater) {}
1399 
1400   /// Is the user of the Attributor a module pass or not. This determines what
1401   /// IR we can look at and modify. If it is a module pass we might deduce facts
1402   /// outside the initial function set and modify functions outside that set,
1403   /// but only as part of the optimization of the functions in the initial
1404   /// function set. For CGSCC passes we can look at the IR of the module slice
1405   /// but never run any deduction, or perform any modification, outside the
1406   /// initial function set (which we assume is the SCC).
1407   bool IsModulePass = true;
1408 
1409   /// Flag to determine if we can delete functions or keep dead ones around.
1410   bool DeleteFns = true;
1411 
1412   /// Flag to determine if we rewrite function signatures.
1413   bool RewriteSignatures = true;
1414 
1415   /// Flag to determine if we want to initialize all default AAs for an internal
1416   /// function marked live. See also: InitializationCallback>
1417   bool DefaultInitializeLiveInternals = true;
1418 
1419   /// Callback function to be invoked on internal functions marked live.
1420   std::function<void(Attributor &A, const Function &F)> InitializationCallback =
1421       nullptr;
1422 
1423   /// Helper to update an underlying call graph and to delete functions.
1424   CallGraphUpdater &CGUpdater;
1425 
1426   /// If not null, a set limiting the attribute opportunities.
1427   DenseSet<const char *> *Allowed = nullptr;
1428 
1429   /// Maximum number of iterations to run until fixpoint.
1430   std::optional<unsigned> MaxFixpointIterations;
1431 
1432   /// A callback function that returns an ORE object from a Function pointer.
1433   ///{
1434   using OptimizationRemarkGetter =
1435       function_ref<OptimizationRemarkEmitter &(Function *)>;
1436   OptimizationRemarkGetter OREGetter = nullptr;
1437   ///}
1438 
1439   /// The name of the pass running the attributor, used to emit remarks.
1440   const char *PassName = nullptr;
1441 };
1442 
1443 /// The fixpoint analysis framework that orchestrates the attribute deduction.
1444 ///
1445 /// The Attributor provides a general abstract analysis framework (guided
1446 /// fixpoint iteration) as well as helper functions for the deduction of
1447 /// (LLVM-IR) attributes. However, also other code properties can be deduced,
1448 /// propagated, and ultimately manifested through the Attributor framework. This
1449 /// is particularly useful if these properties interact with attributes and a
1450 /// co-scheduled deduction allows to improve the solution. Even if not, thus if
1451 /// attributes/properties are completely isolated, they should use the
1452 /// Attributor framework to reduce the number of fixpoint iteration frameworks
1453 /// in the code base. Note that the Attributor design makes sure that isolated
1454 /// attributes are not impacted, in any way, by others derived at the same time
1455 /// if there is no cross-reasoning performed.
1456 ///
1457 /// The public facing interface of the Attributor is kept simple and basically
1458 /// allows abstract attributes to one thing, query abstract attributes
1459 /// in-flight. There are two reasons to do this:
1460 ///    a) The optimistic state of one abstract attribute can justify an
1461 ///       optimistic state of another, allowing to framework to end up with an
1462 ///       optimistic (=best possible) fixpoint instead of one based solely on
1463 ///       information in the IR.
1464 ///    b) This avoids reimplementing various kinds of lookups, e.g., to check
1465 ///       for existing IR attributes, in favor of a single lookups interface
1466 ///       provided by an abstract attribute subclass.
1467 ///
1468 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
1469 ///       described in the file comment.
1470 struct Attributor {
1471 
1472   /// Constructor
1473   ///
1474   /// \param Functions The set of functions we are deriving attributes for.
1475   /// \param InfoCache Cache to hold various information accessible for
1476   ///                  the abstract attributes.
1477   /// \param Configuration The Attributor configuration which determines what
1478   ///                      generic features to use.
1479   Attributor(SetVector<Function *> &Functions, InformationCache &InfoCache,
1480              AttributorConfig Configuration)
1481       : Allocator(InfoCache.Allocator), Functions(Functions),
1482         InfoCache(InfoCache), Configuration(Configuration) {}
1483 
1484   ~Attributor();
1485 
1486   /// Run the analyses until a fixpoint is reached or enforced (timeout).
1487   ///
1488   /// The attributes registered with this Attributor can be used after as long
1489   /// as the Attributor is not destroyed (it owns the attributes now).
1490   ///
1491   /// \Returns CHANGED if the IR was changed, otherwise UNCHANGED.
1492   ChangeStatus run();
1493 
1494   /// Lookup an abstract attribute of type \p AAType at position \p IRP. While
1495   /// no abstract attribute is found equivalent positions are checked, see
1496   /// SubsumingPositionIterator. Thus, the returned abstract attribute
1497   /// might be anchored at a different position, e.g., the callee if \p IRP is a
1498   /// call base.
1499   ///
1500   /// This method is the only (supported) way an abstract attribute can retrieve
1501   /// information from another abstract attribute. As an example, take an
1502   /// abstract attribute that determines the memory access behavior for a
1503   /// argument (readnone, readonly, ...). It should use `getAAFor` to get the
1504   /// most optimistic information for other abstract attributes in-flight, e.g.
1505   /// the one reasoning about the "captured" state for the argument or the one
1506   /// reasoning on the memory access behavior of the function as a whole.
1507   ///
1508   /// If the DepClass enum is set to `DepClassTy::None` the dependence from
1509   /// \p QueryingAA to the return abstract attribute is not automatically
1510   /// recorded. This should only be used if the caller will record the
1511   /// dependence explicitly if necessary, thus if it the returned abstract
1512   /// attribute is used for reasoning. To record the dependences explicitly use
1513   /// the `Attributor::recordDependence` method.
1514   template <typename AAType>
1515   const AAType &getAAFor(const AbstractAttribute &QueryingAA,
1516                          const IRPosition &IRP, DepClassTy DepClass) {
1517     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1518                                     /* ForceUpdate */ false);
1519   }
1520 
1521   /// Similar to getAAFor but the return abstract attribute will be updated (via
1522   /// `AbstractAttribute::update`) even if it is found in the cache. This is
1523   /// especially useful for AAIsDead as changes in liveness can make updates
1524   /// possible/useful that were not happening before as the abstract attribute
1525   /// was assumed dead.
1526   template <typename AAType>
1527   const AAType &getAndUpdateAAFor(const AbstractAttribute &QueryingAA,
1528                                   const IRPosition &IRP, DepClassTy DepClass) {
1529     return getOrCreateAAFor<AAType>(IRP, &QueryingAA, DepClass,
1530                                     /* ForceUpdate */ true);
1531   }
1532 
1533   /// The version of getAAFor that allows to omit a querying abstract
1534   /// attribute. Using this after Attributor started running is restricted to
1535   /// only the Attributor itself. Initial seeding of AAs can be done via this
1536   /// function.
1537   /// NOTE: ForceUpdate is ignored in any stage other than the update stage.
1538   template <typename AAType>
1539   const AAType &getOrCreateAAFor(IRPosition IRP,
1540                                  const AbstractAttribute *QueryingAA,
1541                                  DepClassTy DepClass, bool ForceUpdate = false,
1542                                  bool UpdateAfterInit = true) {
1543     if (!shouldPropagateCallBaseContext(IRP))
1544       IRP = IRP.stripCallBaseContext();
1545 
1546     if (AAType *AAPtr = lookupAAFor<AAType>(IRP, QueryingAA, DepClass,
1547                                             /* AllowInvalidState */ true)) {
1548       if (ForceUpdate && Phase == AttributorPhase::UPDATE)
1549         updateAA(*AAPtr);
1550       return *AAPtr;
1551     }
1552 
1553     // No matching attribute found, create one.
1554     // Use the static create method.
1555     auto &AA = AAType::createForPosition(IRP, *this);
1556 
1557     // Always register a new attribute to make sure we clean up the allocated
1558     // memory properly.
1559     registerAA(AA);
1560 
1561     // If we are currenty seeding attributes, enforce seeding rules.
1562     if (Phase == AttributorPhase::SEEDING && !shouldSeedAttribute(AA)) {
1563       AA.getState().indicatePessimisticFixpoint();
1564       return AA;
1565     }
1566 
1567     // For now we ignore naked and optnone functions.
1568     bool Invalidate =
1569         Configuration.Allowed && !Configuration.Allowed->count(&AAType::ID);
1570     const Function *AnchorFn = IRP.getAnchorScope();
1571     if (AnchorFn) {
1572       Invalidate |=
1573           AnchorFn->hasFnAttribute(Attribute::Naked) ||
1574           AnchorFn->hasFnAttribute(Attribute::OptimizeNone) ||
1575           (!isModulePass() && !getInfoCache().isInModuleSlice(*AnchorFn));
1576     }
1577 
1578     // Avoid too many nested initializations to prevent a stack overflow.
1579     Invalidate |= InitializationChainLength > MaxInitializationChainLength;
1580 
1581     // Bootstrap the new attribute with an initial update to propagate
1582     // information, e.g., function -> call site. If it is not on a given
1583     // Allowed we will not perform updates at all.
1584     if (Invalidate) {
1585       AA.getState().indicatePessimisticFixpoint();
1586       return AA;
1587     }
1588 
1589     {
1590       TimeTraceScope TimeScope(AA.getName() + "::initialize");
1591       ++InitializationChainLength;
1592       AA.initialize(*this);
1593       --InitializationChainLength;
1594     }
1595 
1596     // We update only AAs associated with functions in the Functions set or
1597     // call sites of them.
1598     if ((AnchorFn && !isRunOn(const_cast<Function *>(AnchorFn))) &&
1599         !isRunOn(IRP.getAssociatedFunction())) {
1600       AA.getState().indicatePessimisticFixpoint();
1601       return AA;
1602     }
1603 
1604     // If this is queried in the manifest stage, we force the AA to indicate
1605     // pessimistic fixpoint immediately.
1606     if (Phase == AttributorPhase::MANIFEST ||
1607         Phase == AttributorPhase::CLEANUP) {
1608       AA.getState().indicatePessimisticFixpoint();
1609       return AA;
1610     }
1611 
1612     // Allow seeded attributes to declare dependencies.
1613     // Remember the seeding state.
1614     if (UpdateAfterInit) {
1615       AttributorPhase OldPhase = Phase;
1616       Phase = AttributorPhase::UPDATE;
1617 
1618       updateAA(AA);
1619 
1620       Phase = OldPhase;
1621     }
1622 
1623     if (QueryingAA && AA.getState().isValidState())
1624       recordDependence(AA, const_cast<AbstractAttribute &>(*QueryingAA),
1625                        DepClass);
1626     return AA;
1627   }
1628   template <typename AAType>
1629   const AAType &getOrCreateAAFor(const IRPosition &IRP) {
1630     return getOrCreateAAFor<AAType>(IRP, /* QueryingAA */ nullptr,
1631                                     DepClassTy::NONE);
1632   }
1633 
1634   /// Return the attribute of \p AAType for \p IRP if existing and valid. This
1635   /// also allows non-AA users lookup.
1636   template <typename AAType>
1637   AAType *lookupAAFor(const IRPosition &IRP,
1638                       const AbstractAttribute *QueryingAA = nullptr,
1639                       DepClassTy DepClass = DepClassTy::OPTIONAL,
1640                       bool AllowInvalidState = false) {
1641     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1642                   "Cannot query an attribute with a type not derived from "
1643                   "'AbstractAttribute'!");
1644     // Lookup the abstract attribute of type AAType. If found, return it after
1645     // registering a dependence of QueryingAA on the one returned attribute.
1646     AbstractAttribute *AAPtr = AAMap.lookup({&AAType::ID, IRP});
1647     if (!AAPtr)
1648       return nullptr;
1649 
1650     AAType *AA = static_cast<AAType *>(AAPtr);
1651 
1652     // Do not register a dependence on an attribute with an invalid state.
1653     if (DepClass != DepClassTy::NONE && QueryingAA &&
1654         AA->getState().isValidState())
1655       recordDependence(*AA, const_cast<AbstractAttribute &>(*QueryingAA),
1656                        DepClass);
1657 
1658     // Return nullptr if this attribute has an invalid state.
1659     if (!AllowInvalidState && !AA->getState().isValidState())
1660       return nullptr;
1661     return AA;
1662   }
1663 
1664   /// Allows a query AA to request an update if a new query was received.
1665   void registerForUpdate(AbstractAttribute &AA);
1666 
1667   /// Explicitly record a dependence from \p FromAA to \p ToAA, that is if
1668   /// \p FromAA changes \p ToAA should be updated as well.
1669   ///
1670   /// This method should be used in conjunction with the `getAAFor` method and
1671   /// with the DepClass enum passed to the method set to None. This can
1672   /// be beneficial to avoid false dependences but it requires the users of
1673   /// `getAAFor` to explicitly record true dependences through this method.
1674   /// The \p DepClass flag indicates if the dependence is striclty necessary.
1675   /// That means for required dependences, if \p FromAA changes to an invalid
1676   /// state, \p ToAA can be moved to a pessimistic fixpoint because it required
1677   /// information from \p FromAA but none are available anymore.
1678   void recordDependence(const AbstractAttribute &FromAA,
1679                         const AbstractAttribute &ToAA, DepClassTy DepClass);
1680 
1681   /// Introduce a new abstract attribute into the fixpoint analysis.
1682   ///
1683   /// Note that ownership of the attribute is given to the Attributor. It will
1684   /// invoke delete for the Attributor on destruction of the Attributor.
1685   ///
1686   /// Attributes are identified by their IR position (AAType::getIRPosition())
1687   /// and the address of their static member (see AAType::ID).
1688   template <typename AAType> AAType &registerAA(AAType &AA) {
1689     static_assert(std::is_base_of<AbstractAttribute, AAType>::value,
1690                   "Cannot register an attribute with a type not derived from "
1691                   "'AbstractAttribute'!");
1692     // Put the attribute in the lookup map structure and the container we use to
1693     // keep track of all attributes.
1694     const IRPosition &IRP = AA.getIRPosition();
1695     AbstractAttribute *&AAPtr = AAMap[{&AAType::ID, IRP}];
1696 
1697     assert(!AAPtr && "Attribute already in map!");
1698     AAPtr = &AA;
1699 
1700     // Register AA with the synthetic root only before the manifest stage.
1701     if (Phase == AttributorPhase::SEEDING || Phase == AttributorPhase::UPDATE)
1702       DG.SyntheticRoot.Deps.push_back(
1703           AADepGraphNode::DepTy(&AA, unsigned(DepClassTy::REQUIRED)));
1704 
1705     return AA;
1706   }
1707 
1708   /// Return the internal information cache.
1709   InformationCache &getInfoCache() { return InfoCache; }
1710 
1711   /// Return true if this is a module pass, false otherwise.
1712   bool isModulePass() const { return Configuration.IsModulePass; }
1713 
1714   /// Return true if we derive attributes for \p Fn
1715   bool isRunOn(Function &Fn) const { return isRunOn(&Fn); }
1716   bool isRunOn(Function *Fn) const {
1717     return Functions.empty() || Functions.count(Fn);
1718   }
1719 
1720   /// Determine opportunities to derive 'default' attributes in \p F and create
1721   /// abstract attribute objects for them.
1722   ///
1723   /// \param F The function that is checked for attribute opportunities.
1724   ///
1725   /// Note that abstract attribute instances are generally created even if the
1726   /// IR already contains the information they would deduce. The most important
1727   /// reason for this is the single interface, the one of the abstract attribute
1728   /// instance, which can be queried without the need to look at the IR in
1729   /// various places.
1730   void identifyDefaultAbstractAttributes(Function &F);
1731 
1732   /// Determine whether the function \p F is IPO amendable
1733   ///
1734   /// If a function is exactly defined or it has alwaysinline attribute
1735   /// and is viable to be inlined, we say it is IPO amendable
1736   bool isFunctionIPOAmendable(const Function &F) {
1737     return F.hasExactDefinition() || InfoCache.InlineableFunctions.count(&F);
1738   }
1739 
1740   /// Mark the internal function \p F as live.
1741   ///
1742   /// This will trigger the identification and initialization of attributes for
1743   /// \p F.
1744   void markLiveInternalFunction(const Function &F) {
1745     assert(F.hasLocalLinkage() &&
1746            "Only local linkage is assumed dead initially.");
1747 
1748     if (Configuration.DefaultInitializeLiveInternals)
1749       identifyDefaultAbstractAttributes(const_cast<Function &>(F));
1750     if (Configuration.InitializationCallback)
1751       Configuration.InitializationCallback(*this, F);
1752   }
1753 
1754   /// Helper function to remove callsite.
1755   void removeCallSite(CallInst *CI) {
1756     if (!CI)
1757       return;
1758 
1759     Configuration.CGUpdater.removeCallSite(*CI);
1760   }
1761 
1762   /// Record that \p U is to be replaces with \p NV after information was
1763   /// manifested. This also triggers deletion of trivially dead istructions.
1764   bool changeUseAfterManifest(Use &U, Value &NV) {
1765     Value *&V = ToBeChangedUses[&U];
1766     if (V && (V->stripPointerCasts() == NV.stripPointerCasts() ||
1767               isa_and_nonnull<UndefValue>(V)))
1768       return false;
1769     assert((!V || V == &NV || isa<UndefValue>(NV)) &&
1770            "Use was registered twice for replacement with different values!");
1771     V = &NV;
1772     return true;
1773   }
1774 
1775   /// Helper function to replace all uses associated with \p IRP with \p NV.
1776   /// Return true if there is any change. The flag \p ChangeDroppable indicates
1777   /// if dropppable uses should be changed too.
1778   bool changeAfterManifest(const IRPosition IRP, Value &NV,
1779                            bool ChangeDroppable = true) {
1780     if (IRP.getPositionKind() == IRPosition::IRP_CALL_SITE_ARGUMENT) {
1781       auto *CB = cast<CallBase>(IRP.getCtxI());
1782       return changeUseAfterManifest(
1783           CB->getArgOperandUse(IRP.getCallSiteArgNo()), NV);
1784     }
1785     Value &V = IRP.getAssociatedValue();
1786     auto &Entry = ToBeChangedValues[&V];
1787     Value *CurNV = get<0>(Entry);
1788     if (CurNV && (CurNV->stripPointerCasts() == NV.stripPointerCasts() ||
1789                   isa<UndefValue>(CurNV)))
1790       return false;
1791     assert((!CurNV || CurNV == &NV || isa<UndefValue>(NV)) &&
1792            "Value replacement was registered twice with different values!");
1793     Entry = {&NV, ChangeDroppable};
1794     return true;
1795   }
1796 
1797   /// Record that \p I is to be replaced with `unreachable` after information
1798   /// was manifested.
1799   void changeToUnreachableAfterManifest(Instruction *I) {
1800     ToBeChangedToUnreachableInsts.insert(I);
1801   }
1802 
1803   /// Record that \p II has at least one dead successor block. This information
1804   /// is used, e.g., to replace \p II with a call, after information was
1805   /// manifested.
1806   void registerInvokeWithDeadSuccessor(InvokeInst &II) {
1807     InvokeWithDeadSuccessor.insert(&II);
1808   }
1809 
1810   /// Record that \p I is deleted after information was manifested. This also
1811   /// triggers deletion of trivially dead istructions.
1812   void deleteAfterManifest(Instruction &I) { ToBeDeletedInsts.insert(&I); }
1813 
1814   /// Record that \p BB is deleted after information was manifested. This also
1815   /// triggers deletion of trivially dead istructions.
1816   void deleteAfterManifest(BasicBlock &BB) { ToBeDeletedBlocks.insert(&BB); }
1817 
1818   // Record that \p BB is added during the manifest of an AA. Added basic blocks
1819   // are preserved in the IR.
1820   void registerManifestAddedBasicBlock(BasicBlock &BB) {
1821     ManifestAddedBlocks.insert(&BB);
1822   }
1823 
1824   /// Record that \p F is deleted after information was manifested.
1825   void deleteAfterManifest(Function &F) {
1826     if (Configuration.DeleteFns)
1827       ToBeDeletedFunctions.insert(&F);
1828   }
1829 
1830   /// If \p IRP is assumed to be a constant, return it, if it is unclear yet,
1831   /// return std::nullopt, otherwise return `nullptr`.
1832   std::optional<Constant *> getAssumedConstant(const IRPosition &IRP,
1833                                                const AbstractAttribute &AA,
1834                                                bool &UsedAssumedInformation);
1835   std::optional<Constant *> getAssumedConstant(const Value &V,
1836                                                const AbstractAttribute &AA,
1837                                                bool &UsedAssumedInformation) {
1838     return getAssumedConstant(IRPosition::value(V), AA, UsedAssumedInformation);
1839   }
1840 
1841   /// If \p V is assumed simplified, return it, if it is unclear yet,
1842   /// return std::nullopt, otherwise return `nullptr`.
1843   std::optional<Value *> getAssumedSimplified(const IRPosition &IRP,
1844                                               const AbstractAttribute &AA,
1845                                               bool &UsedAssumedInformation,
1846                                               AA::ValueScope S) {
1847     return getAssumedSimplified(IRP, &AA, UsedAssumedInformation, S);
1848   }
1849   std::optional<Value *> getAssumedSimplified(const Value &V,
1850                                               const AbstractAttribute &AA,
1851                                               bool &UsedAssumedInformation,
1852                                               AA::ValueScope S) {
1853     return getAssumedSimplified(IRPosition::value(V), AA,
1854                                 UsedAssumedInformation, S);
1855   }
1856 
1857   /// If \p V is assumed simplified, return it, if it is unclear yet,
1858   /// return std::nullopt, otherwise return `nullptr`. Same as the public
1859   /// version except that it can be used without recording dependences on any \p
1860   /// AA.
1861   std::optional<Value *> getAssumedSimplified(const IRPosition &V,
1862                                               const AbstractAttribute *AA,
1863                                               bool &UsedAssumedInformation,
1864                                               AA::ValueScope S);
1865 
1866   /// Try to simplify \p IRP and in the scope \p S. If successful, true is
1867   /// returned and all potential values \p IRP can take are put into \p Values.
1868   /// If the result in \p Values contains select or PHI instructions it means
1869   /// those could not be simplified to a single value. Recursive calls with
1870   /// these instructions will yield their respective potential values. If false
1871   /// is returned no other information is valid.
1872   bool getAssumedSimplifiedValues(const IRPosition &IRP,
1873                                   const AbstractAttribute *AA,
1874                                   SmallVectorImpl<AA::ValueAndContext> &Values,
1875                                   AA::ValueScope S,
1876                                   bool &UsedAssumedInformation);
1877 
1878   /// Register \p CB as a simplification callback.
1879   /// `Attributor::getAssumedSimplified` will use these callbacks before
1880   /// we it will ask `AAValueSimplify`. It is important to ensure this
1881   /// is called before `identifyDefaultAbstractAttributes`, assuming the
1882   /// latter is called at all.
1883   using SimplifictionCallbackTy = std::function<std::optional<Value *>(
1884       const IRPosition &, const AbstractAttribute *, bool &)>;
1885   void registerSimplificationCallback(const IRPosition &IRP,
1886                                       const SimplifictionCallbackTy &CB) {
1887     SimplificationCallbacks[IRP].emplace_back(CB);
1888   }
1889 
1890   /// Return true if there is a simplification callback for \p IRP.
1891   bool hasSimplificationCallback(const IRPosition &IRP) {
1892     return SimplificationCallbacks.count(IRP);
1893   }
1894 
1895   using VirtualUseCallbackTy =
1896       std::function<bool(Attributor &, const AbstractAttribute *)>;
1897   void registerVirtualUseCallback(const Value &V,
1898                                   const VirtualUseCallbackTy &CB) {
1899     VirtualUseCallbacks[&V].emplace_back(CB);
1900   }
1901 
1902 private:
1903   /// The vector with all simplification callbacks registered by outside AAs.
1904   DenseMap<IRPosition, SmallVector<SimplifictionCallbackTy, 1>>
1905       SimplificationCallbacks;
1906 
1907   DenseMap<const Value *, SmallVector<VirtualUseCallbackTy, 1>>
1908       VirtualUseCallbacks;
1909 
1910 public:
1911   /// Translate \p V from the callee context into the call site context.
1912   std::optional<Value *>
1913   translateArgumentToCallSiteContent(std::optional<Value *> V, CallBase &CB,
1914                                      const AbstractAttribute &AA,
1915                                      bool &UsedAssumedInformation);
1916 
1917   /// Return true if \p AA (or its context instruction) is assumed dead.
1918   ///
1919   /// If \p LivenessAA is not provided it is queried.
1920   bool isAssumedDead(const AbstractAttribute &AA, const AAIsDead *LivenessAA,
1921                      bool &UsedAssumedInformation,
1922                      bool CheckBBLivenessOnly = false,
1923                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1924 
1925   /// Return true if \p I is assumed dead.
1926   ///
1927   /// If \p LivenessAA is not provided it is queried.
1928   bool isAssumedDead(const Instruction &I, const AbstractAttribute *QueryingAA,
1929                      const AAIsDead *LivenessAA, bool &UsedAssumedInformation,
1930                      bool CheckBBLivenessOnly = false,
1931                      DepClassTy DepClass = DepClassTy::OPTIONAL,
1932                      bool CheckForDeadStore = false);
1933 
1934   /// Return true if \p U is assumed dead.
1935   ///
1936   /// If \p FnLivenessAA is not provided it is queried.
1937   bool isAssumedDead(const Use &U, const AbstractAttribute *QueryingAA,
1938                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1939                      bool CheckBBLivenessOnly = false,
1940                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1941 
1942   /// Return true if \p IRP is assumed dead.
1943   ///
1944   /// If \p FnLivenessAA is not provided it is queried.
1945   bool isAssumedDead(const IRPosition &IRP, const AbstractAttribute *QueryingAA,
1946                      const AAIsDead *FnLivenessAA, bool &UsedAssumedInformation,
1947                      bool CheckBBLivenessOnly = false,
1948                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1949 
1950   /// Return true if \p BB is assumed dead.
1951   ///
1952   /// If \p LivenessAA is not provided it is queried.
1953   bool isAssumedDead(const BasicBlock &BB, const AbstractAttribute *QueryingAA,
1954                      const AAIsDead *FnLivenessAA,
1955                      DepClassTy DepClass = DepClassTy::OPTIONAL);
1956 
1957   /// Check \p Pred on all (transitive) uses of \p V.
1958   ///
1959   /// This method will evaluate \p Pred on all (transitive) uses of the
1960   /// associated value and return true if \p Pred holds every time.
1961   /// If uses are skipped in favor of equivalent ones, e.g., if we look through
1962   /// memory, the \p EquivalentUseCB will be used to give the caller an idea
1963   /// what original used was replaced by a new one (or new ones). The visit is
1964   /// cut short if \p EquivalentUseCB returns false and the function will return
1965   /// false as well.
1966   bool checkForAllUses(function_ref<bool(const Use &, bool &)> Pred,
1967                        const AbstractAttribute &QueryingAA, const Value &V,
1968                        bool CheckBBLivenessOnly = false,
1969                        DepClassTy LivenessDepClass = DepClassTy::OPTIONAL,
1970                        bool IgnoreDroppableUses = true,
1971                        function_ref<bool(const Use &OldU, const Use &NewU)>
1972                            EquivalentUseCB = nullptr);
1973 
1974   /// Emit a remark generically.
1975   ///
1976   /// This template function can be used to generically emit a remark. The
1977   /// RemarkKind should be one of the following:
1978   ///   - OptimizationRemark to indicate a successful optimization attempt
1979   ///   - OptimizationRemarkMissed to report a failed optimization attempt
1980   ///   - OptimizationRemarkAnalysis to provide additional information about an
1981   ///     optimization attempt
1982   ///
1983   /// The remark is built using a callback function \p RemarkCB that takes a
1984   /// RemarkKind as input and returns a RemarkKind.
1985   template <typename RemarkKind, typename RemarkCallBack>
1986   void emitRemark(Instruction *I, StringRef RemarkName,
1987                   RemarkCallBack &&RemarkCB) const {
1988     if (!Configuration.OREGetter)
1989       return;
1990 
1991     Function *F = I->getFunction();
1992     auto &ORE = Configuration.OREGetter(F);
1993 
1994     if (RemarkName.startswith("OMP"))
1995       ORE.emit([&]() {
1996         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, I))
1997                << " [" << RemarkName << "]";
1998       });
1999     else
2000       ORE.emit([&]() {
2001         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, I));
2002       });
2003   }
2004 
2005   /// Emit a remark on a function.
2006   template <typename RemarkKind, typename RemarkCallBack>
2007   void emitRemark(Function *F, StringRef RemarkName,
2008                   RemarkCallBack &&RemarkCB) const {
2009     if (!Configuration.OREGetter)
2010       return;
2011 
2012     auto &ORE = Configuration.OREGetter(F);
2013 
2014     if (RemarkName.startswith("OMP"))
2015       ORE.emit([&]() {
2016         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, F))
2017                << " [" << RemarkName << "]";
2018       });
2019     else
2020       ORE.emit([&]() {
2021         return RemarkCB(RemarkKind(Configuration.PassName, RemarkName, F));
2022       });
2023   }
2024 
2025   /// Helper struct used in the communication between an abstract attribute (AA)
2026   /// that wants to change the signature of a function and the Attributor which
2027   /// applies the changes. The struct is partially initialized with the
2028   /// information from the AA (see the constructor). All other members are
2029   /// provided by the Attributor prior to invoking any callbacks.
2030   struct ArgumentReplacementInfo {
2031     /// Callee repair callback type
2032     ///
2033     /// The function repair callback is invoked once to rewire the replacement
2034     /// arguments in the body of the new function. The argument replacement info
2035     /// is passed, as build from the registerFunctionSignatureRewrite call, as
2036     /// well as the replacement function and an iteratore to the first
2037     /// replacement argument.
2038     using CalleeRepairCBTy = std::function<void(
2039         const ArgumentReplacementInfo &, Function &, Function::arg_iterator)>;
2040 
2041     /// Abstract call site (ACS) repair callback type
2042     ///
2043     /// The abstract call site repair callback is invoked once on every abstract
2044     /// call site of the replaced function (\see ReplacedFn). The callback needs
2045     /// to provide the operands for the call to the new replacement function.
2046     /// The number and type of the operands appended to the provided vector
2047     /// (second argument) is defined by the number and types determined through
2048     /// the replacement type vector (\see ReplacementTypes). The first argument
2049     /// is the ArgumentReplacementInfo object registered with the Attributor
2050     /// through the registerFunctionSignatureRewrite call.
2051     using ACSRepairCBTy =
2052         std::function<void(const ArgumentReplacementInfo &, AbstractCallSite,
2053                            SmallVectorImpl<Value *> &)>;
2054 
2055     /// Simple getters, see the corresponding members for details.
2056     ///{
2057 
2058     Attributor &getAttributor() const { return A; }
2059     const Function &getReplacedFn() const { return ReplacedFn; }
2060     const Argument &getReplacedArg() const { return ReplacedArg; }
2061     unsigned getNumReplacementArgs() const { return ReplacementTypes.size(); }
2062     const SmallVectorImpl<Type *> &getReplacementTypes() const {
2063       return ReplacementTypes;
2064     }
2065 
2066     ///}
2067 
2068   private:
2069     /// Constructor that takes the argument to be replaced, the types of
2070     /// the replacement arguments, as well as callbacks to repair the call sites
2071     /// and new function after the replacement happened.
2072     ArgumentReplacementInfo(Attributor &A, Argument &Arg,
2073                             ArrayRef<Type *> ReplacementTypes,
2074                             CalleeRepairCBTy &&CalleeRepairCB,
2075                             ACSRepairCBTy &&ACSRepairCB)
2076         : A(A), ReplacedFn(*Arg.getParent()), ReplacedArg(Arg),
2077           ReplacementTypes(ReplacementTypes.begin(), ReplacementTypes.end()),
2078           CalleeRepairCB(std::move(CalleeRepairCB)),
2079           ACSRepairCB(std::move(ACSRepairCB)) {}
2080 
2081     /// Reference to the attributor to allow access from the callbacks.
2082     Attributor &A;
2083 
2084     /// The "old" function replaced by ReplacementFn.
2085     const Function &ReplacedFn;
2086 
2087     /// The "old" argument replaced by new ones defined via ReplacementTypes.
2088     const Argument &ReplacedArg;
2089 
2090     /// The types of the arguments replacing ReplacedArg.
2091     const SmallVector<Type *, 8> ReplacementTypes;
2092 
2093     /// Callee repair callback, see CalleeRepairCBTy.
2094     const CalleeRepairCBTy CalleeRepairCB;
2095 
2096     /// Abstract call site (ACS) repair callback, see ACSRepairCBTy.
2097     const ACSRepairCBTy ACSRepairCB;
2098 
2099     /// Allow access to the private members from the Attributor.
2100     friend struct Attributor;
2101   };
2102 
2103   /// Check if we can rewrite a function signature.
2104   ///
2105   /// The argument \p Arg is replaced with new ones defined by the number,
2106   /// order, and types in \p ReplacementTypes.
2107   ///
2108   /// \returns True, if the replacement can be registered, via
2109   /// registerFunctionSignatureRewrite, false otherwise.
2110   bool isValidFunctionSignatureRewrite(Argument &Arg,
2111                                        ArrayRef<Type *> ReplacementTypes);
2112 
2113   /// Register a rewrite for a function signature.
2114   ///
2115   /// The argument \p Arg is replaced with new ones defined by the number,
2116   /// order, and types in \p ReplacementTypes. The rewiring at the call sites is
2117   /// done through \p ACSRepairCB and at the callee site through
2118   /// \p CalleeRepairCB.
2119   ///
2120   /// \returns True, if the replacement was registered, false otherwise.
2121   bool registerFunctionSignatureRewrite(
2122       Argument &Arg, ArrayRef<Type *> ReplacementTypes,
2123       ArgumentReplacementInfo::CalleeRepairCBTy &&CalleeRepairCB,
2124       ArgumentReplacementInfo::ACSRepairCBTy &&ACSRepairCB);
2125 
2126   /// Check \p Pred on all function call sites.
2127   ///
2128   /// This method will evaluate \p Pred on call sites and return
2129   /// true if \p Pred holds in every call sites. However, this is only possible
2130   /// all call sites are known, hence the function has internal linkage.
2131   /// If true is returned, \p UsedAssumedInformation is set if assumed
2132   /// information was used to skip or simplify potential call sites.
2133   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
2134                             const AbstractAttribute &QueryingAA,
2135                             bool RequireAllCallSites,
2136                             bool &UsedAssumedInformation);
2137 
2138   /// Check \p Pred on all call sites of \p Fn.
2139   ///
2140   /// This method will evaluate \p Pred on call sites and return
2141   /// true if \p Pred holds in every call sites. However, this is only possible
2142   /// all call sites are known, hence the function has internal linkage.
2143   /// If true is returned, \p UsedAssumedInformation is set if assumed
2144   /// information was used to skip or simplify potential call sites.
2145   bool checkForAllCallSites(function_ref<bool(AbstractCallSite)> Pred,
2146                             const Function &Fn, bool RequireAllCallSites,
2147                             const AbstractAttribute *QueryingAA,
2148                             bool &UsedAssumedInformation,
2149                             bool CheckPotentiallyDead = false);
2150 
2151   /// Check \p Pred on all values potentially returned by \p F.
2152   ///
2153   /// This method will evaluate \p Pred on all values potentially returned by
2154   /// the function associated with \p QueryingAA. The returned values are
2155   /// matched with their respective return instructions. Returns true if \p Pred
2156   /// holds on all of them.
2157   bool checkForAllReturnedValuesAndReturnInsts(
2158       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred,
2159       const AbstractAttribute &QueryingAA);
2160 
2161   /// Check \p Pred on all values potentially returned by the function
2162   /// associated with \p QueryingAA.
2163   ///
2164   /// This is the context insensitive version of the method above.
2165   bool checkForAllReturnedValues(function_ref<bool(Value &)> Pred,
2166                                  const AbstractAttribute &QueryingAA);
2167 
2168   /// Check \p Pred on all instructions in \p Fn with an opcode present in
2169   /// \p Opcodes.
2170   ///
2171   /// This method will evaluate \p Pred on all instructions with an opcode
2172   /// present in \p Opcode and return true if \p Pred holds on all of them.
2173   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
2174                                const Function *Fn,
2175                                const AbstractAttribute &QueryingAA,
2176                                const ArrayRef<unsigned> &Opcodes,
2177                                bool &UsedAssumedInformation,
2178                                bool CheckBBLivenessOnly = false,
2179                                bool CheckPotentiallyDead = false);
2180 
2181   /// Check \p Pred on all instructions with an opcode present in \p Opcodes.
2182   ///
2183   /// This method will evaluate \p Pred on all instructions with an opcode
2184   /// present in \p Opcode and return true if \p Pred holds on all of them.
2185   bool checkForAllInstructions(function_ref<bool(Instruction &)> Pred,
2186                                const AbstractAttribute &QueryingAA,
2187                                const ArrayRef<unsigned> &Opcodes,
2188                                bool &UsedAssumedInformation,
2189                                bool CheckBBLivenessOnly = false,
2190                                bool CheckPotentiallyDead = false);
2191 
2192   /// Check \p Pred on all call-like instructions (=CallBased derived).
2193   ///
2194   /// See checkForAllCallLikeInstructions(...) for more information.
2195   bool checkForAllCallLikeInstructions(function_ref<bool(Instruction &)> Pred,
2196                                        const AbstractAttribute &QueryingAA,
2197                                        bool &UsedAssumedInformation,
2198                                        bool CheckBBLivenessOnly = false,
2199                                        bool CheckPotentiallyDead = false) {
2200     return checkForAllInstructions(
2201         Pred, QueryingAA,
2202         {(unsigned)Instruction::Invoke, (unsigned)Instruction::CallBr,
2203          (unsigned)Instruction::Call},
2204         UsedAssumedInformation, CheckBBLivenessOnly, CheckPotentiallyDead);
2205   }
2206 
2207   /// Check \p Pred on all Read/Write instructions.
2208   ///
2209   /// This method will evaluate \p Pred on all instructions that read or write
2210   /// to memory present in the information cache and return true if \p Pred
2211   /// holds on all of them.
2212   bool checkForAllReadWriteInstructions(function_ref<bool(Instruction &)> Pred,
2213                                         AbstractAttribute &QueryingAA,
2214                                         bool &UsedAssumedInformation);
2215 
2216   /// Create a shallow wrapper for \p F such that \p F has internal linkage
2217   /// afterwards. It also sets the original \p F 's name to anonymous
2218   ///
2219   /// A wrapper is a function with the same type (and attributes) as \p F
2220   /// that will only call \p F and return the result, if any.
2221   ///
2222   /// Assuming the declaration of looks like:
2223   ///   rty F(aty0 arg0, ..., atyN argN);
2224   ///
2225   /// The wrapper will then look as follows:
2226   ///   rty wrapper(aty0 arg0, ..., atyN argN) {
2227   ///     return F(arg0, ..., argN);
2228   ///   }
2229   ///
2230   static void createShallowWrapper(Function &F);
2231 
2232   /// Returns true if the function \p F can be internalized. i.e. it has a
2233   /// compatible linkage.
2234   static bool isInternalizable(Function &F);
2235 
2236   /// Make another copy of the function \p F such that the copied version has
2237   /// internal linkage afterwards and can be analysed. Then we replace all uses
2238   /// of the original function to the copied one
2239   ///
2240   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
2241   /// linkage can be internalized because these linkages guarantee that other
2242   /// definitions with the same name have the same semantics as this one.
2243   ///
2244   /// This will only be run if the `attributor-allow-deep-wrappers` option is
2245   /// set, or if the function is called with \p Force set to true.
2246   ///
2247   /// If the function \p F failed to be internalized the return value will be a
2248   /// null pointer.
2249   static Function *internalizeFunction(Function &F, bool Force = false);
2250 
2251   /// Make copies of each function in the set \p FnSet such that the copied
2252   /// version has internal linkage afterwards and can be analysed. Then we
2253   /// replace all uses of the original function to the copied one. The map
2254   /// \p FnMap contains a mapping of functions to their internalized versions.
2255   ///
2256   /// Only non-locally linked functions that have `linkonce_odr` or `weak_odr`
2257   /// linkage can be internalized because these linkages guarantee that other
2258   /// definitions with the same name have the same semantics as this one.
2259   ///
2260   /// This version will internalize all the functions in the set \p FnSet at
2261   /// once and then replace the uses. This prevents internalized functions being
2262   /// called by external functions when there is an internalized version in the
2263   /// module.
2264   static bool internalizeFunctions(SmallPtrSetImpl<Function *> &FnSet,
2265                                    DenseMap<Function *, Function *> &FnMap);
2266 
2267   /// Return the data layout associated with the anchor scope.
2268   const DataLayout &getDataLayout() const { return InfoCache.DL; }
2269 
2270   /// The allocator used to allocate memory, e.g. for `AbstractAttribute`s.
2271   BumpPtrAllocator &Allocator;
2272 
2273 private:
2274   /// This method will do fixpoint iteration until fixpoint or the
2275   /// maximum iteration count is reached.
2276   ///
2277   /// If the maximum iteration count is reached, This method will
2278   /// indicate pessimistic fixpoint on attributes that transitively depend
2279   /// on attributes that were scheduled for an update.
2280   void runTillFixpoint();
2281 
2282   /// Gets called after scheduling, manifests attributes to the LLVM IR.
2283   ChangeStatus manifestAttributes();
2284 
2285   /// Gets called after attributes have been manifested, cleans up the IR.
2286   /// Deletes dead functions, blocks and instructions.
2287   /// Rewrites function signitures and updates the call graph.
2288   ChangeStatus cleanupIR();
2289 
2290   /// Identify internal functions that are effectively dead, thus not reachable
2291   /// from a live entry point. The functions are added to ToBeDeletedFunctions.
2292   void identifyDeadInternalFunctions();
2293 
2294   /// Run `::update` on \p AA and track the dependences queried while doing so.
2295   /// Also adjust the state if we know further updates are not necessary.
2296   ChangeStatus updateAA(AbstractAttribute &AA);
2297 
2298   /// Remember the dependences on the top of the dependence stack such that they
2299   /// may trigger further updates. (\see DependenceStack)
2300   void rememberDependences();
2301 
2302   /// Determine if CallBase context in \p IRP should be propagated.
2303   bool shouldPropagateCallBaseContext(const IRPosition &IRP);
2304 
2305   /// Apply all requested function signature rewrites
2306   /// (\see registerFunctionSignatureRewrite) and return Changed if the module
2307   /// was altered.
2308   ChangeStatus
2309   rewriteFunctionSignatures(SmallSetVector<Function *, 8> &ModifiedFns);
2310 
2311   /// Check if the Attribute \p AA should be seeded.
2312   /// See getOrCreateAAFor.
2313   bool shouldSeedAttribute(AbstractAttribute &AA);
2314 
2315   /// A nested map to lookup abstract attributes based on the argument position
2316   /// on the outer level, and the addresses of the static member (AAType::ID) on
2317   /// the inner level.
2318   ///{
2319   using AAMapKeyTy = std::pair<const char *, IRPosition>;
2320   DenseMap<AAMapKeyTy, AbstractAttribute *> AAMap;
2321   ///}
2322 
2323   /// Map to remember all requested signature changes (= argument replacements).
2324   DenseMap<Function *, SmallVector<std::unique_ptr<ArgumentReplacementInfo>, 8>>
2325       ArgumentReplacementMap;
2326 
2327   /// The set of functions we are deriving attributes for.
2328   SetVector<Function *> &Functions;
2329 
2330   /// The information cache that holds pre-processed (LLVM-IR) information.
2331   InformationCache &InfoCache;
2332 
2333   /// Abstract Attribute dependency graph
2334   AADepGraph DG;
2335 
2336   /// Set of functions for which we modified the content such that it might
2337   /// impact the call graph.
2338   SmallSetVector<Function *, 8> CGModifiedFunctions;
2339 
2340   /// Information about a dependence. If FromAA is changed ToAA needs to be
2341   /// updated as well.
2342   struct DepInfo {
2343     const AbstractAttribute *FromAA;
2344     const AbstractAttribute *ToAA;
2345     DepClassTy DepClass;
2346   };
2347 
2348   /// The dependence stack is used to track dependences during an
2349   /// `AbstractAttribute::update` call. As `AbstractAttribute::update` can be
2350   /// recursive we might have multiple vectors of dependences in here. The stack
2351   /// size, should be adjusted according to the expected recursion depth and the
2352   /// inner dependence vector size to the expected number of dependences per
2353   /// abstract attribute. Since the inner vectors are actually allocated on the
2354   /// stack we can be generous with their size.
2355   using DependenceVector = SmallVector<DepInfo, 8>;
2356   SmallVector<DependenceVector *, 16> DependenceStack;
2357 
2358   /// A set to remember the functions we already assume to be live and visited.
2359   DenseSet<const Function *> VisitedFunctions;
2360 
2361   /// Uses we replace with a new value after manifest is done. We will remove
2362   /// then trivially dead instructions as well.
2363   SmallMapVector<Use *, Value *, 32> ToBeChangedUses;
2364 
2365   /// Values we replace with a new value after manifest is done. We will remove
2366   /// then trivially dead instructions as well.
2367   SmallMapVector<Value *, PointerIntPair<Value *, 1, bool>, 32>
2368       ToBeChangedValues;
2369 
2370   /// Instructions we replace with `unreachable` insts after manifest is done.
2371   SmallSetVector<WeakVH, 16> ToBeChangedToUnreachableInsts;
2372 
2373   /// Invoke instructions with at least a single dead successor block.
2374   SmallSetVector<WeakVH, 16> InvokeWithDeadSuccessor;
2375 
2376   /// A flag that indicates which stage of the process we are in. Initially, the
2377   /// phase is SEEDING. Phase is changed in `Attributor::run()`
2378   enum class AttributorPhase {
2379     SEEDING,
2380     UPDATE,
2381     MANIFEST,
2382     CLEANUP,
2383   } Phase = AttributorPhase::SEEDING;
2384 
2385   /// The current initialization chain length. Tracked to avoid stack overflows.
2386   unsigned InitializationChainLength = 0;
2387 
2388   /// Functions, blocks, and instructions we delete after manifest is done.
2389   ///
2390   ///{
2391   SmallPtrSet<BasicBlock *, 8> ManifestAddedBlocks;
2392   SmallSetVector<Function *, 8> ToBeDeletedFunctions;
2393   SmallSetVector<BasicBlock *, 8> ToBeDeletedBlocks;
2394   SmallSetVector<WeakVH, 8> ToBeDeletedInsts;
2395   ///}
2396 
2397   /// Container with all the query AAs that requested an update via
2398   /// registerForUpdate.
2399   SmallSetVector<AbstractAttribute *, 16> QueryAAsAwaitingUpdate;
2400 
2401   /// User provided configuration for this Attributor instance.
2402   const AttributorConfig Configuration;
2403 
2404   friend AADepGraph;
2405   friend AttributorCallGraph;
2406 };
2407 
2408 /// An interface to query the internal state of an abstract attribute.
2409 ///
2410 /// The abstract state is a minimal interface that allows the Attributor to
2411 /// communicate with the abstract attributes about their internal state without
2412 /// enforcing or exposing implementation details, e.g., the (existence of an)
2413 /// underlying lattice.
2414 ///
2415 /// It is sufficient to be able to query if a state is (1) valid or invalid, (2)
2416 /// at a fixpoint, and to indicate to the state that (3) an optimistic fixpoint
2417 /// was reached or (4) a pessimistic fixpoint was enforced.
2418 ///
2419 /// All methods need to be implemented by the subclass. For the common use case,
2420 /// a single boolean state or a bit-encoded state, the BooleanState and
2421 /// {Inc,Dec,Bit}IntegerState classes are already provided. An abstract
2422 /// attribute can inherit from them to get the abstract state interface and
2423 /// additional methods to directly modify the state based if needed. See the
2424 /// class comments for help.
2425 struct AbstractState {
2426   virtual ~AbstractState() = default;
2427 
2428   /// Return if this abstract state is in a valid state. If false, no
2429   /// information provided should be used.
2430   virtual bool isValidState() const = 0;
2431 
2432   /// Return if this abstract state is fixed, thus does not need to be updated
2433   /// if information changes as it cannot change itself.
2434   virtual bool isAtFixpoint() const = 0;
2435 
2436   /// Indicate that the abstract state should converge to the optimistic state.
2437   ///
2438   /// This will usually make the optimistically assumed state the known to be
2439   /// true state.
2440   ///
2441   /// \returns ChangeStatus::UNCHANGED as the assumed value should not change.
2442   virtual ChangeStatus indicateOptimisticFixpoint() = 0;
2443 
2444   /// Indicate that the abstract state should converge to the pessimistic state.
2445   ///
2446   /// This will usually revert the optimistically assumed state to the known to
2447   /// be true state.
2448   ///
2449   /// \returns ChangeStatus::CHANGED as the assumed value may change.
2450   virtual ChangeStatus indicatePessimisticFixpoint() = 0;
2451 };
2452 
2453 /// Simple state with integers encoding.
2454 ///
2455 /// The interface ensures that the assumed bits are always a subset of the known
2456 /// bits. Users can only add known bits and, except through adding known bits,
2457 /// they can only remove assumed bits. This should guarantee monotoniticy and
2458 /// thereby the existence of a fixpoint (if used corretly). The fixpoint is
2459 /// reached when the assumed and known state/bits are equal. Users can
2460 /// force/inidicate a fixpoint. If an optimistic one is indicated, the known
2461 /// state will catch up with the assumed one, for a pessimistic fixpoint it is
2462 /// the other way around.
2463 template <typename base_ty, base_ty BestState, base_ty WorstState>
2464 struct IntegerStateBase : public AbstractState {
2465   using base_t = base_ty;
2466 
2467   IntegerStateBase() = default;
2468   IntegerStateBase(base_t Assumed) : Assumed(Assumed) {}
2469 
2470   /// Return the best possible representable state.
2471   static constexpr base_t getBestState() { return BestState; }
2472   static constexpr base_t getBestState(const IntegerStateBase &) {
2473     return getBestState();
2474   }
2475 
2476   /// Return the worst possible representable state.
2477   static constexpr base_t getWorstState() { return WorstState; }
2478   static constexpr base_t getWorstState(const IntegerStateBase &) {
2479     return getWorstState();
2480   }
2481 
2482   /// See AbstractState::isValidState()
2483   /// NOTE: For now we simply pretend that the worst possible state is invalid.
2484   bool isValidState() const override { return Assumed != getWorstState(); }
2485 
2486   /// See AbstractState::isAtFixpoint()
2487   bool isAtFixpoint() const override { return Assumed == Known; }
2488 
2489   /// See AbstractState::indicateOptimisticFixpoint(...)
2490   ChangeStatus indicateOptimisticFixpoint() override {
2491     Known = Assumed;
2492     return ChangeStatus::UNCHANGED;
2493   }
2494 
2495   /// See AbstractState::indicatePessimisticFixpoint(...)
2496   ChangeStatus indicatePessimisticFixpoint() override {
2497     Assumed = Known;
2498     return ChangeStatus::CHANGED;
2499   }
2500 
2501   /// Return the known state encoding
2502   base_t getKnown() const { return Known; }
2503 
2504   /// Return the assumed state encoding.
2505   base_t getAssumed() const { return Assumed; }
2506 
2507   /// Equality for IntegerStateBase.
2508   bool
2509   operator==(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2510     return this->getAssumed() == R.getAssumed() &&
2511            this->getKnown() == R.getKnown();
2512   }
2513 
2514   /// Inequality for IntegerStateBase.
2515   bool
2516   operator!=(const IntegerStateBase<base_t, BestState, WorstState> &R) const {
2517     return !(*this == R);
2518   }
2519 
2520   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2521   /// intended that only information assumed in both states will be assumed in
2522   /// this one afterwards.
2523   void operator^=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2524     handleNewAssumedValue(R.getAssumed());
2525   }
2526 
2527   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2528   /// intended that information known in either state will be known in
2529   /// this one afterwards.
2530   void operator+=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2531     handleNewKnownValue(R.getKnown());
2532   }
2533 
2534   void operator|=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2535     joinOR(R.getAssumed(), R.getKnown());
2536   }
2537 
2538   void operator&=(const IntegerStateBase<base_t, BestState, WorstState> &R) {
2539     joinAND(R.getAssumed(), R.getKnown());
2540   }
2541 
2542 protected:
2543   /// Handle a new assumed value \p Value. Subtype dependent.
2544   virtual void handleNewAssumedValue(base_t Value) = 0;
2545 
2546   /// Handle a new known value \p Value. Subtype dependent.
2547   virtual void handleNewKnownValue(base_t Value) = 0;
2548 
2549   /// Handle a  value \p Value. Subtype dependent.
2550   virtual void joinOR(base_t AssumedValue, base_t KnownValue) = 0;
2551 
2552   /// Handle a new assumed value \p Value. Subtype dependent.
2553   virtual void joinAND(base_t AssumedValue, base_t KnownValue) = 0;
2554 
2555   /// The known state encoding in an integer of type base_t.
2556   base_t Known = getWorstState();
2557 
2558   /// The assumed state encoding in an integer of type base_t.
2559   base_t Assumed = getBestState();
2560 };
2561 
2562 /// Specialization of the integer state for a bit-wise encoding.
2563 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2564           base_ty WorstState = 0>
2565 struct BitIntegerState
2566     : public IntegerStateBase<base_ty, BestState, WorstState> {
2567   using base_t = base_ty;
2568 
2569   /// Return true if the bits set in \p BitsEncoding are "known bits".
2570   bool isKnown(base_t BitsEncoding) const {
2571     return (this->Known & BitsEncoding) == BitsEncoding;
2572   }
2573 
2574   /// Return true if the bits set in \p BitsEncoding are "assumed bits".
2575   bool isAssumed(base_t BitsEncoding) const {
2576     return (this->Assumed & BitsEncoding) == BitsEncoding;
2577   }
2578 
2579   /// Add the bits in \p BitsEncoding to the "known bits".
2580   BitIntegerState &addKnownBits(base_t Bits) {
2581     // Make sure we never miss any "known bits".
2582     this->Assumed |= Bits;
2583     this->Known |= Bits;
2584     return *this;
2585   }
2586 
2587   /// Remove the bits in \p BitsEncoding from the "assumed bits" if not known.
2588   BitIntegerState &removeAssumedBits(base_t BitsEncoding) {
2589     return intersectAssumedBits(~BitsEncoding);
2590   }
2591 
2592   /// Remove the bits in \p BitsEncoding from the "known bits".
2593   BitIntegerState &removeKnownBits(base_t BitsEncoding) {
2594     this->Known = (this->Known & ~BitsEncoding);
2595     return *this;
2596   }
2597 
2598   /// Keep only "assumed bits" also set in \p BitsEncoding but all known ones.
2599   BitIntegerState &intersectAssumedBits(base_t BitsEncoding) {
2600     // Make sure we never loose any "known bits".
2601     this->Assumed = (this->Assumed & BitsEncoding) | this->Known;
2602     return *this;
2603   }
2604 
2605 private:
2606   void handleNewAssumedValue(base_t Value) override {
2607     intersectAssumedBits(Value);
2608   }
2609   void handleNewKnownValue(base_t Value) override { addKnownBits(Value); }
2610   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2611     this->Known |= KnownValue;
2612     this->Assumed |= AssumedValue;
2613   }
2614   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2615     this->Known &= KnownValue;
2616     this->Assumed &= AssumedValue;
2617   }
2618 };
2619 
2620 /// Specialization of the integer state for an increasing value, hence ~0u is
2621 /// the best state and 0 the worst.
2622 template <typename base_ty = uint32_t, base_ty BestState = ~base_ty(0),
2623           base_ty WorstState = 0>
2624 struct IncIntegerState
2625     : public IntegerStateBase<base_ty, BestState, WorstState> {
2626   using super = IntegerStateBase<base_ty, BestState, WorstState>;
2627   using base_t = base_ty;
2628 
2629   IncIntegerState() : super() {}
2630   IncIntegerState(base_t Assumed) : super(Assumed) {}
2631 
2632   /// Return the best possible representable state.
2633   static constexpr base_t getBestState() { return BestState; }
2634   static constexpr base_t
2635   getBestState(const IncIntegerState<base_ty, BestState, WorstState> &) {
2636     return getBestState();
2637   }
2638 
2639   /// Take minimum of assumed and \p Value.
2640   IncIntegerState &takeAssumedMinimum(base_t Value) {
2641     // Make sure we never loose "known value".
2642     this->Assumed = std::max(std::min(this->Assumed, Value), this->Known);
2643     return *this;
2644   }
2645 
2646   /// Take maximum of known and \p Value.
2647   IncIntegerState &takeKnownMaximum(base_t Value) {
2648     // Make sure we never loose "known value".
2649     this->Assumed = std::max(Value, this->Assumed);
2650     this->Known = std::max(Value, this->Known);
2651     return *this;
2652   }
2653 
2654 private:
2655   void handleNewAssumedValue(base_t Value) override {
2656     takeAssumedMinimum(Value);
2657   }
2658   void handleNewKnownValue(base_t Value) override { takeKnownMaximum(Value); }
2659   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2660     this->Known = std::max(this->Known, KnownValue);
2661     this->Assumed = std::max(this->Assumed, AssumedValue);
2662   }
2663   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2664     this->Known = std::min(this->Known, KnownValue);
2665     this->Assumed = std::min(this->Assumed, AssumedValue);
2666   }
2667 };
2668 
2669 /// Specialization of the integer state for a decreasing value, hence 0 is the
2670 /// best state and ~0u the worst.
2671 template <typename base_ty = uint32_t>
2672 struct DecIntegerState : public IntegerStateBase<base_ty, 0, ~base_ty(0)> {
2673   using base_t = base_ty;
2674 
2675   /// Take maximum of assumed and \p Value.
2676   DecIntegerState &takeAssumedMaximum(base_t Value) {
2677     // Make sure we never loose "known value".
2678     this->Assumed = std::min(std::max(this->Assumed, Value), this->Known);
2679     return *this;
2680   }
2681 
2682   /// Take minimum of known and \p Value.
2683   DecIntegerState &takeKnownMinimum(base_t Value) {
2684     // Make sure we never loose "known value".
2685     this->Assumed = std::min(Value, this->Assumed);
2686     this->Known = std::min(Value, this->Known);
2687     return *this;
2688   }
2689 
2690 private:
2691   void handleNewAssumedValue(base_t Value) override {
2692     takeAssumedMaximum(Value);
2693   }
2694   void handleNewKnownValue(base_t Value) override { takeKnownMinimum(Value); }
2695   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2696     this->Assumed = std::min(this->Assumed, KnownValue);
2697     this->Assumed = std::min(this->Assumed, AssumedValue);
2698   }
2699   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2700     this->Assumed = std::max(this->Assumed, KnownValue);
2701     this->Assumed = std::max(this->Assumed, AssumedValue);
2702   }
2703 };
2704 
2705 /// Simple wrapper for a single bit (boolean) state.
2706 struct BooleanState : public IntegerStateBase<bool, true, false> {
2707   using super = IntegerStateBase<bool, true, false>;
2708   using base_t = IntegerStateBase::base_t;
2709 
2710   BooleanState() = default;
2711   BooleanState(base_t Assumed) : super(Assumed) {}
2712 
2713   /// Set the assumed value to \p Value but never below the known one.
2714   void setAssumed(bool Value) { Assumed &= (Known | Value); }
2715 
2716   /// Set the known and asssumed value to \p Value.
2717   void setKnown(bool Value) {
2718     Known |= Value;
2719     Assumed |= Value;
2720   }
2721 
2722   /// Return true if the state is assumed to hold.
2723   bool isAssumed() const { return getAssumed(); }
2724 
2725   /// Return true if the state is known to hold.
2726   bool isKnown() const { return getKnown(); }
2727 
2728 private:
2729   void handleNewAssumedValue(base_t Value) override {
2730     if (!Value)
2731       Assumed = Known;
2732   }
2733   void handleNewKnownValue(base_t Value) override {
2734     if (Value)
2735       Known = (Assumed = Value);
2736   }
2737   void joinOR(base_t AssumedValue, base_t KnownValue) override {
2738     Known |= KnownValue;
2739     Assumed |= AssumedValue;
2740   }
2741   void joinAND(base_t AssumedValue, base_t KnownValue) override {
2742     Known &= KnownValue;
2743     Assumed &= AssumedValue;
2744   }
2745 };
2746 
2747 /// State for an integer range.
2748 struct IntegerRangeState : public AbstractState {
2749 
2750   /// Bitwidth of the associated value.
2751   uint32_t BitWidth;
2752 
2753   /// State representing assumed range, initially set to empty.
2754   ConstantRange Assumed;
2755 
2756   /// State representing known range, initially set to [-inf, inf].
2757   ConstantRange Known;
2758 
2759   IntegerRangeState(uint32_t BitWidth)
2760       : BitWidth(BitWidth), Assumed(ConstantRange::getEmpty(BitWidth)),
2761         Known(ConstantRange::getFull(BitWidth)) {}
2762 
2763   IntegerRangeState(const ConstantRange &CR)
2764       : BitWidth(CR.getBitWidth()), Assumed(CR),
2765         Known(getWorstState(CR.getBitWidth())) {}
2766 
2767   /// Return the worst possible representable state.
2768   static ConstantRange getWorstState(uint32_t BitWidth) {
2769     return ConstantRange::getFull(BitWidth);
2770   }
2771 
2772   /// Return the best possible representable state.
2773   static ConstantRange getBestState(uint32_t BitWidth) {
2774     return ConstantRange::getEmpty(BitWidth);
2775   }
2776   static ConstantRange getBestState(const IntegerRangeState &IRS) {
2777     return getBestState(IRS.getBitWidth());
2778   }
2779 
2780   /// Return associated values' bit width.
2781   uint32_t getBitWidth() const { return BitWidth; }
2782 
2783   /// See AbstractState::isValidState()
2784   bool isValidState() const override {
2785     return BitWidth > 0 && !Assumed.isFullSet();
2786   }
2787 
2788   /// See AbstractState::isAtFixpoint()
2789   bool isAtFixpoint() const override { return Assumed == Known; }
2790 
2791   /// See AbstractState::indicateOptimisticFixpoint(...)
2792   ChangeStatus indicateOptimisticFixpoint() override {
2793     Known = Assumed;
2794     return ChangeStatus::CHANGED;
2795   }
2796 
2797   /// See AbstractState::indicatePessimisticFixpoint(...)
2798   ChangeStatus indicatePessimisticFixpoint() override {
2799     Assumed = Known;
2800     return ChangeStatus::CHANGED;
2801   }
2802 
2803   /// Return the known state encoding
2804   ConstantRange getKnown() const { return Known; }
2805 
2806   /// Return the assumed state encoding.
2807   ConstantRange getAssumed() const { return Assumed; }
2808 
2809   /// Unite assumed range with the passed state.
2810   void unionAssumed(const ConstantRange &R) {
2811     // Don't loose a known range.
2812     Assumed = Assumed.unionWith(R).intersectWith(Known);
2813   }
2814 
2815   /// See IntegerRangeState::unionAssumed(..).
2816   void unionAssumed(const IntegerRangeState &R) {
2817     unionAssumed(R.getAssumed());
2818   }
2819 
2820   /// Intersect known range with the passed state.
2821   void intersectKnown(const ConstantRange &R) {
2822     Assumed = Assumed.intersectWith(R);
2823     Known = Known.intersectWith(R);
2824   }
2825 
2826   /// See IntegerRangeState::intersectKnown(..).
2827   void intersectKnown(const IntegerRangeState &R) {
2828     intersectKnown(R.getKnown());
2829   }
2830 
2831   /// Equality for IntegerRangeState.
2832   bool operator==(const IntegerRangeState &R) const {
2833     return getAssumed() == R.getAssumed() && getKnown() == R.getKnown();
2834   }
2835 
2836   /// "Clamp" this state with \p R. The result is subtype dependent but it is
2837   /// intended that only information assumed in both states will be assumed in
2838   /// this one afterwards.
2839   IntegerRangeState operator^=(const IntegerRangeState &R) {
2840     // NOTE: `^=` operator seems like `intersect` but in this case, we need to
2841     // take `union`.
2842     unionAssumed(R);
2843     return *this;
2844   }
2845 
2846   IntegerRangeState operator&=(const IntegerRangeState &R) {
2847     // NOTE: `&=` operator seems like `intersect` but in this case, we need to
2848     // take `union`.
2849     Known = Known.unionWith(R.getKnown());
2850     Assumed = Assumed.unionWith(R.getAssumed());
2851     return *this;
2852   }
2853 };
2854 
2855 /// Simple state for a set.
2856 ///
2857 /// This represents a state containing a set of values. The interface supports
2858 /// modelling sets that contain all possible elements. The state's internal
2859 /// value is modified using union or intersection operations.
2860 template <typename BaseTy> struct SetState : public AbstractState {
2861   /// A wrapper around a set that has semantics for handling unions and
2862   /// intersections with a "universal" set that contains all elements.
2863   struct SetContents {
2864     /// Creates a universal set with no concrete elements or an empty set.
2865     SetContents(bool Universal) : Universal(Universal) {}
2866 
2867     /// Creates a non-universal set with concrete values.
2868     SetContents(const DenseSet<BaseTy> &Assumptions)
2869         : Universal(false), Set(Assumptions) {}
2870 
2871     SetContents(bool Universal, const DenseSet<BaseTy> &Assumptions)
2872         : Universal(Universal), Set(Assumptions) {}
2873 
2874     const DenseSet<BaseTy> &getSet() const { return Set; }
2875 
2876     bool isUniversal() const { return Universal; }
2877 
2878     bool empty() const { return Set.empty() && !Universal; }
2879 
2880     /// Finds A := A ^ B where A or B could be the "Universal" set which
2881     /// contains every possible attribute. Returns true if changes were made.
2882     bool getIntersection(const SetContents &RHS) {
2883       bool IsUniversal = Universal;
2884       unsigned Size = Set.size();
2885 
2886       // A := A ^ U = A
2887       if (RHS.isUniversal())
2888         return false;
2889 
2890       // A := U ^ B = B
2891       if (Universal)
2892         Set = RHS.getSet();
2893       else
2894         set_intersect(Set, RHS.getSet());
2895 
2896       Universal &= RHS.isUniversal();
2897       return IsUniversal != Universal || Size != Set.size();
2898     }
2899 
2900     /// Finds A := A u B where A or B could be the "Universal" set which
2901     /// contains every possible attribute. returns true if changes were made.
2902     bool getUnion(const SetContents &RHS) {
2903       bool IsUniversal = Universal;
2904       unsigned Size = Set.size();
2905 
2906       // A := A u U = U = U u B
2907       if (!RHS.isUniversal() && !Universal)
2908         set_union(Set, RHS.getSet());
2909 
2910       Universal |= RHS.isUniversal();
2911       return IsUniversal != Universal || Size != Set.size();
2912     }
2913 
2914   private:
2915     /// Indicates if this set is "universal", containing every possible element.
2916     bool Universal;
2917 
2918     /// The set of currently active assumptions.
2919     DenseSet<BaseTy> Set;
2920   };
2921 
2922   SetState() : Known(false), Assumed(true), IsAtFixedpoint(false) {}
2923 
2924   /// Initializes the known state with an initial set and initializes the
2925   /// assumed state as universal.
2926   SetState(const DenseSet<BaseTy> &Known)
2927       : Known(Known), Assumed(true), IsAtFixedpoint(false) {}
2928 
2929   /// See AbstractState::isValidState()
2930   bool isValidState() const override { return !Assumed.empty(); }
2931 
2932   /// See AbstractState::isAtFixpoint()
2933   bool isAtFixpoint() const override { return IsAtFixedpoint; }
2934 
2935   /// See AbstractState::indicateOptimisticFixpoint(...)
2936   ChangeStatus indicateOptimisticFixpoint() override {
2937     IsAtFixedpoint = true;
2938     Known = Assumed;
2939     return ChangeStatus::UNCHANGED;
2940   }
2941 
2942   /// See AbstractState::indicatePessimisticFixpoint(...)
2943   ChangeStatus indicatePessimisticFixpoint() override {
2944     IsAtFixedpoint = true;
2945     Assumed = Known;
2946     return ChangeStatus::CHANGED;
2947   }
2948 
2949   /// Return the known state encoding.
2950   const SetContents &getKnown() const { return Known; }
2951 
2952   /// Return the assumed state encoding.
2953   const SetContents &getAssumed() const { return Assumed; }
2954 
2955   /// Returns if the set state contains the element.
2956   bool setContains(const BaseTy &Elem) const {
2957     return Assumed.getSet().contains(Elem) || Known.getSet().contains(Elem);
2958   }
2959 
2960   /// Performs the set intersection between this set and \p RHS. Returns true if
2961   /// changes were made.
2962   bool getIntersection(const SetContents &RHS) {
2963     unsigned SizeBefore = Assumed.getSet().size();
2964 
2965     // Get intersection and make sure that the known set is still a proper
2966     // subset of the assumed set. A := K u (A ^ R).
2967     Assumed.getIntersection(RHS);
2968     Assumed.getUnion(Known);
2969 
2970     return SizeBefore != Assumed.getSet().size();
2971   }
2972 
2973   /// Performs the set union between this set and \p RHS. Returns true if
2974   /// changes were made.
2975   bool getUnion(const SetContents &RHS) { return Assumed.getUnion(RHS); }
2976 
2977 private:
2978   /// The set of values known for this state.
2979   SetContents Known;
2980 
2981   /// The set of assumed values for this state.
2982   SetContents Assumed;
2983 
2984   bool IsAtFixedpoint;
2985 };
2986 
2987 /// Helper struct necessary as the modular build fails if the virtual method
2988 /// IRAttribute::manifest is defined in the Attributor.cpp.
2989 struct IRAttributeManifest {
2990   static ChangeStatus manifestAttrs(Attributor &A, const IRPosition &IRP,
2991                                     const ArrayRef<Attribute> &DeducedAttrs,
2992                                     bool ForceReplace = false);
2993 };
2994 
2995 /// Helper to tie a abstract state implementation to an abstract attribute.
2996 template <typename StateTy, typename BaseType, class... Ts>
2997 struct StateWrapper : public BaseType, public StateTy {
2998   /// Provide static access to the type of the state.
2999   using StateType = StateTy;
3000 
3001   StateWrapper(const IRPosition &IRP, Ts... Args)
3002       : BaseType(IRP), StateTy(Args...) {}
3003 
3004   /// See AbstractAttribute::getState(...).
3005   StateType &getState() override { return *this; }
3006 
3007   /// See AbstractAttribute::getState(...).
3008   const StateType &getState() const override { return *this; }
3009 };
3010 
3011 /// Helper class that provides common functionality to manifest IR attributes.
3012 template <Attribute::AttrKind AK, typename BaseType>
3013 struct IRAttribute : public BaseType {
3014   IRAttribute(const IRPosition &IRP) : BaseType(IRP) {}
3015 
3016   /// See AbstractAttribute::initialize(...).
3017   void initialize(Attributor &A) override {
3018     const IRPosition &IRP = this->getIRPosition();
3019     if (isa<UndefValue>(IRP.getAssociatedValue()) ||
3020         this->hasAttr(getAttrKind(), /* IgnoreSubsumingPositions */ false,
3021                       &A)) {
3022       this->getState().indicateOptimisticFixpoint();
3023       return;
3024     }
3025 
3026     bool IsFnInterface = IRP.isFnInterfaceKind();
3027     const Function *FnScope = IRP.getAnchorScope();
3028     // TODO: Not all attributes require an exact definition. Find a way to
3029     //       enable deduction for some but not all attributes in case the
3030     //       definition might be changed at runtime, see also
3031     //       http://lists.llvm.org/pipermail/llvm-dev/2018-February/121275.html.
3032     // TODO: We could always determine abstract attributes and if sufficient
3033     //       information was found we could duplicate the functions that do not
3034     //       have an exact definition.
3035     if (IsFnInterface && (!FnScope || !A.isFunctionIPOAmendable(*FnScope)))
3036       this->getState().indicatePessimisticFixpoint();
3037   }
3038 
3039   /// See AbstractAttribute::manifest(...).
3040   ChangeStatus manifest(Attributor &A) override {
3041     if (isa<UndefValue>(this->getIRPosition().getAssociatedValue()))
3042       return ChangeStatus::UNCHANGED;
3043     SmallVector<Attribute, 4> DeducedAttrs;
3044     getDeducedAttributes(this->getAnchorValue().getContext(), DeducedAttrs);
3045     return IRAttributeManifest::manifestAttrs(A, this->getIRPosition(),
3046                                               DeducedAttrs);
3047   }
3048 
3049   /// Return the kind that identifies the abstract attribute implementation.
3050   Attribute::AttrKind getAttrKind() const { return AK; }
3051 
3052   /// Return the deduced attributes in \p Attrs.
3053   virtual void getDeducedAttributes(LLVMContext &Ctx,
3054                                     SmallVectorImpl<Attribute> &Attrs) const {
3055     Attrs.emplace_back(Attribute::get(Ctx, getAttrKind()));
3056   }
3057 };
3058 
3059 /// Base struct for all "concrete attribute" deductions.
3060 ///
3061 /// The abstract attribute is a minimal interface that allows the Attributor to
3062 /// orchestrate the abstract/fixpoint analysis. The design allows to hide away
3063 /// implementation choices made for the subclasses but also to structure their
3064 /// implementation and simplify the use of other abstract attributes in-flight.
3065 ///
3066 /// To allow easy creation of new attributes, most methods have default
3067 /// implementations. The ones that do not are generally straight forward, except
3068 /// `AbstractAttribute::updateImpl` which is the location of most reasoning
3069 /// associated with the abstract attribute. The update is invoked by the
3070 /// Attributor in case the situation used to justify the current optimistic
3071 /// state might have changed. The Attributor determines this automatically
3072 /// by monitoring the `Attributor::getAAFor` calls made by abstract attributes.
3073 ///
3074 /// The `updateImpl` method should inspect the IR and other abstract attributes
3075 /// in-flight to justify the best possible (=optimistic) state. The actual
3076 /// implementation is, similar to the underlying abstract state encoding, not
3077 /// exposed. In the most common case, the `updateImpl` will go through a list of
3078 /// reasons why its optimistic state is valid given the current information. If
3079 /// any combination of them holds and is sufficient to justify the current
3080 /// optimistic state, the method shall return UNCHAGED. If not, the optimistic
3081 /// state is adjusted to the situation and the method shall return CHANGED.
3082 ///
3083 /// If the manifestation of the "concrete attribute" deduced by the subclass
3084 /// differs from the "default" behavior, which is a (set of) LLVM-IR
3085 /// attribute(s) for an argument, call site argument, function return value, or
3086 /// function, the `AbstractAttribute::manifest` method should be overloaded.
3087 ///
3088 /// NOTE: If the state obtained via getState() is INVALID, thus if
3089 ///       AbstractAttribute::getState().isValidState() returns false, no
3090 ///       information provided by the methods of this class should be used.
3091 /// NOTE: The Attributor currently has certain limitations to what we can do.
3092 ///       As a general rule of thumb, "concrete" abstract attributes should *for
3093 ///       now* only perform "backward" information propagation. That means
3094 ///       optimistic information obtained through abstract attributes should
3095 ///       only be used at positions that precede the origin of the information
3096 ///       with regards to the program flow. More practically, information can
3097 ///       *now* be propagated from instructions to their enclosing function, but
3098 ///       *not* from call sites to the called function. The mechanisms to allow
3099 ///       both directions will be added in the future.
3100 /// NOTE: The mechanics of adding a new "concrete" abstract attribute are
3101 ///       described in the file comment.
3102 struct AbstractAttribute : public IRPosition, public AADepGraphNode {
3103   using StateType = AbstractState;
3104 
3105   AbstractAttribute(const IRPosition &IRP) : IRPosition(IRP) {}
3106 
3107   /// Virtual destructor.
3108   virtual ~AbstractAttribute() = default;
3109 
3110   /// This function is used to identify if an \p DGN is of type
3111   /// AbstractAttribute so that the dyn_cast and cast can use such information
3112   /// to cast an AADepGraphNode to an AbstractAttribute.
3113   ///
3114   /// We eagerly return true here because all AADepGraphNodes except for the
3115   /// Synthethis Node are of type AbstractAttribute
3116   static bool classof(const AADepGraphNode *DGN) { return true; }
3117 
3118   /// Initialize the state with the information in the Attributor \p A.
3119   ///
3120   /// This function is called by the Attributor once all abstract attributes
3121   /// have been identified. It can and shall be used for task like:
3122   ///  - identify existing knowledge in the IR and use it for the "known state"
3123   ///  - perform any work that is not going to change over time, e.g., determine
3124   ///    a subset of the IR, or attributes in-flight, that have to be looked at
3125   ///    in the `updateImpl` method.
3126   virtual void initialize(Attributor &A) {}
3127 
3128   /// A query AA is always scheduled as long as we do updates because it does
3129   /// lazy computation that cannot be determined to be done from the outside.
3130   /// However, while query AAs will not be fixed if they do not have outstanding
3131   /// dependences, we will only schedule them like other AAs. If a query AA that
3132   /// received a new query it needs to request an update via
3133   /// `Attributor::requestUpdateForAA`.
3134   virtual bool isQueryAA() const { return false; }
3135 
3136   /// Return the internal abstract state for inspection.
3137   virtual StateType &getState() = 0;
3138   virtual const StateType &getState() const = 0;
3139 
3140   /// Return an IR position, see struct IRPosition.
3141   const IRPosition &getIRPosition() const { return *this; };
3142   IRPosition &getIRPosition() { return *this; };
3143 
3144   /// Helper functions, for debug purposes only.
3145   ///{
3146   void print(raw_ostream &OS) const override;
3147   virtual void printWithDeps(raw_ostream &OS) const;
3148   void dump() const { print(dbgs()); }
3149 
3150   /// This function should return the "summarized" assumed state as string.
3151   virtual const std::string getAsStr() const = 0;
3152 
3153   /// This function should return the name of the AbstractAttribute
3154   virtual const std::string getName() const = 0;
3155 
3156   /// This function should return the address of the ID of the AbstractAttribute
3157   virtual const char *getIdAddr() const = 0;
3158   ///}
3159 
3160   /// Allow the Attributor access to the protected methods.
3161   friend struct Attributor;
3162 
3163 protected:
3164   /// Hook for the Attributor to trigger an update of the internal state.
3165   ///
3166   /// If this attribute is already fixed, this method will return UNCHANGED,
3167   /// otherwise it delegates to `AbstractAttribute::updateImpl`.
3168   ///
3169   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
3170   ChangeStatus update(Attributor &A);
3171 
3172   /// Hook for the Attributor to trigger the manifestation of the information
3173   /// represented by the abstract attribute in the LLVM-IR.
3174   ///
3175   /// \Return CHANGED if the IR was altered, otherwise UNCHANGED.
3176   virtual ChangeStatus manifest(Attributor &A) {
3177     return ChangeStatus::UNCHANGED;
3178   }
3179 
3180   /// Hook to enable custom statistic tracking, called after manifest that
3181   /// resulted in a change if statistics are enabled.
3182   ///
3183   /// We require subclasses to provide an implementation so we remember to
3184   /// add statistics for them.
3185   virtual void trackStatistics() const = 0;
3186 
3187   /// The actual update/transfer function which has to be implemented by the
3188   /// derived classes.
3189   ///
3190   /// If it is called, the environment has changed and we have to determine if
3191   /// the current information is still valid or adjust it otherwise.
3192   ///
3193   /// \Return CHANGED if the internal state changed, otherwise UNCHANGED.
3194   virtual ChangeStatus updateImpl(Attributor &A) = 0;
3195 };
3196 
3197 /// Forward declarations of output streams for debug purposes.
3198 ///
3199 ///{
3200 raw_ostream &operator<<(raw_ostream &OS, const AbstractAttribute &AA);
3201 raw_ostream &operator<<(raw_ostream &OS, ChangeStatus S);
3202 raw_ostream &operator<<(raw_ostream &OS, IRPosition::Kind);
3203 raw_ostream &operator<<(raw_ostream &OS, const IRPosition &);
3204 raw_ostream &operator<<(raw_ostream &OS, const AbstractState &State);
3205 template <typename base_ty, base_ty BestState, base_ty WorstState>
3206 raw_ostream &
3207 operator<<(raw_ostream &OS,
3208            const IntegerStateBase<base_ty, BestState, WorstState> &S) {
3209   return OS << "(" << S.getKnown() << "-" << S.getAssumed() << ")"
3210             << static_cast<const AbstractState &>(S);
3211 }
3212 raw_ostream &operator<<(raw_ostream &OS, const IntegerRangeState &State);
3213 ///}
3214 
3215 struct AttributorPass : public PassInfoMixin<AttributorPass> {
3216   PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM);
3217 };
3218 struct AttributorCGSCCPass : public PassInfoMixin<AttributorCGSCCPass> {
3219   PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
3220                         LazyCallGraph &CG, CGSCCUpdateResult &UR);
3221 };
3222 
3223 Pass *createAttributorLegacyPass();
3224 Pass *createAttributorCGSCCLegacyPass();
3225 
3226 /// Helper function to clamp a state \p S of type \p StateType with the
3227 /// information in \p R and indicate/return if \p S did change (as-in update is
3228 /// required to be run again).
3229 template <typename StateType>
3230 ChangeStatus clampStateAndIndicateChange(StateType &S, const StateType &R) {
3231   auto Assumed = S.getAssumed();
3232   S ^= R;
3233   return Assumed == S.getAssumed() ? ChangeStatus::UNCHANGED
3234                                    : ChangeStatus::CHANGED;
3235 }
3236 
3237 /// ----------------------------------------------------------------------------
3238 ///                       Abstract Attribute Classes
3239 /// ----------------------------------------------------------------------------
3240 
3241 /// An abstract attribute for the returned values of a function.
3242 struct AAReturnedValues
3243     : public IRAttribute<Attribute::Returned, AbstractAttribute> {
3244   AAReturnedValues(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3245 
3246   /// Check \p Pred on all returned values.
3247   ///
3248   /// This method will evaluate \p Pred on returned values and return
3249   /// true if (1) all returned values are known, and (2) \p Pred returned true
3250   /// for all returned values.
3251   ///
3252   /// Note: Unlike the Attributor::checkForAllReturnedValuesAndReturnInsts
3253   /// method, this one will not filter dead return instructions.
3254   virtual bool checkForAllReturnedValuesAndReturnInsts(
3255       function_ref<bool(Value &, const SmallSetVector<ReturnInst *, 4> &)> Pred)
3256       const = 0;
3257 
3258   using iterator =
3259       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::iterator;
3260   using const_iterator =
3261       MapVector<Value *, SmallSetVector<ReturnInst *, 4>>::const_iterator;
3262   virtual llvm::iterator_range<iterator> returned_values() = 0;
3263   virtual llvm::iterator_range<const_iterator> returned_values() const = 0;
3264 
3265   virtual size_t getNumReturnValues() const = 0;
3266 
3267   /// Create an abstract attribute view for the position \p IRP.
3268   static AAReturnedValues &createForPosition(const IRPosition &IRP,
3269                                              Attributor &A);
3270 
3271   /// See AbstractAttribute::getName()
3272   const std::string getName() const override { return "AAReturnedValues"; }
3273 
3274   /// See AbstractAttribute::getIdAddr()
3275   const char *getIdAddr() const override { return &ID; }
3276 
3277   /// This function should return true if the type of the \p AA is
3278   /// AAReturnedValues
3279   static bool classof(const AbstractAttribute *AA) {
3280     return (AA->getIdAddr() == &ID);
3281   }
3282 
3283   /// Unique ID (due to the unique address)
3284   static const char ID;
3285 };
3286 
3287 struct AANoUnwind
3288     : public IRAttribute<Attribute::NoUnwind,
3289                          StateWrapper<BooleanState, AbstractAttribute>> {
3290   AANoUnwind(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3291 
3292   /// Returns true if nounwind is assumed.
3293   bool isAssumedNoUnwind() const { return getAssumed(); }
3294 
3295   /// Returns true if nounwind is known.
3296   bool isKnownNoUnwind() const { return getKnown(); }
3297 
3298   /// Create an abstract attribute view for the position \p IRP.
3299   static AANoUnwind &createForPosition(const IRPosition &IRP, Attributor &A);
3300 
3301   /// See AbstractAttribute::getName()
3302   const std::string getName() const override { return "AANoUnwind"; }
3303 
3304   /// See AbstractAttribute::getIdAddr()
3305   const char *getIdAddr() const override { return &ID; }
3306 
3307   /// This function should return true if the type of the \p AA is AANoUnwind
3308   static bool classof(const AbstractAttribute *AA) {
3309     return (AA->getIdAddr() == &ID);
3310   }
3311 
3312   /// Unique ID (due to the unique address)
3313   static const char ID;
3314 };
3315 
3316 struct AANoSync
3317     : public IRAttribute<Attribute::NoSync,
3318                          StateWrapper<BooleanState, AbstractAttribute>> {
3319   AANoSync(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3320 
3321   /// Returns true if "nosync" is assumed.
3322   bool isAssumedNoSync() const { return getAssumed(); }
3323 
3324   /// Returns true if "nosync" is known.
3325   bool isKnownNoSync() const { return getKnown(); }
3326 
3327   /// Helper function used to determine whether an instruction is non-relaxed
3328   /// atomic. In other words, if an atomic instruction does not have unordered
3329   /// or monotonic ordering
3330   static bool isNonRelaxedAtomic(const Instruction *I);
3331 
3332   /// Helper function specific for intrinsics which are potentially volatile.
3333   static bool isNoSyncIntrinsic(const Instruction *I);
3334 
3335   /// Helper function to determine if \p CB is an aligned (GPU) barrier. Aligned
3336   /// barriers have to be executed by all threads. The flag \p ExecutedAligned
3337   /// indicates if the call is executed by all threads in a (thread) block in an
3338   /// aligned way. If that is the case, non-aligned barriers are effectively
3339   /// aligned barriers.
3340   static bool isAlignedBarrier(const CallBase &CB, bool ExecutedAligned);
3341 
3342   /// Create an abstract attribute view for the position \p IRP.
3343   static AANoSync &createForPosition(const IRPosition &IRP, Attributor &A);
3344 
3345   /// See AbstractAttribute::getName()
3346   const std::string getName() const override { return "AANoSync"; }
3347 
3348   /// See AbstractAttribute::getIdAddr()
3349   const char *getIdAddr() const override { return &ID; }
3350 
3351   /// This function should return true if the type of the \p AA is AANoSync
3352   static bool classof(const AbstractAttribute *AA) {
3353     return (AA->getIdAddr() == &ID);
3354   }
3355 
3356   /// Unique ID (due to the unique address)
3357   static const char ID;
3358 };
3359 
3360 /// An abstract interface for all nonnull attributes.
3361 struct AANonNull
3362     : public IRAttribute<Attribute::NonNull,
3363                          StateWrapper<BooleanState, AbstractAttribute>> {
3364   AANonNull(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3365 
3366   /// Return true if we assume that the underlying value is nonnull.
3367   bool isAssumedNonNull() const { return getAssumed(); }
3368 
3369   /// Return true if we know that underlying value is nonnull.
3370   bool isKnownNonNull() const { return getKnown(); }
3371 
3372   /// Create an abstract attribute view for the position \p IRP.
3373   static AANonNull &createForPosition(const IRPosition &IRP, Attributor &A);
3374 
3375   /// See AbstractAttribute::getName()
3376   const std::string getName() const override { return "AANonNull"; }
3377 
3378   /// See AbstractAttribute::getIdAddr()
3379   const char *getIdAddr() const override { return &ID; }
3380 
3381   /// This function should return true if the type of the \p AA is AANonNull
3382   static bool classof(const AbstractAttribute *AA) {
3383     return (AA->getIdAddr() == &ID);
3384   }
3385 
3386   /// Unique ID (due to the unique address)
3387   static const char ID;
3388 };
3389 
3390 /// An abstract attribute for norecurse.
3391 struct AANoRecurse
3392     : public IRAttribute<Attribute::NoRecurse,
3393                          StateWrapper<BooleanState, AbstractAttribute>> {
3394   AANoRecurse(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3395 
3396   /// Return true if "norecurse" is assumed.
3397   bool isAssumedNoRecurse() const { return getAssumed(); }
3398 
3399   /// Return true if "norecurse" is known.
3400   bool isKnownNoRecurse() const { return getKnown(); }
3401 
3402   /// Create an abstract attribute view for the position \p IRP.
3403   static AANoRecurse &createForPosition(const IRPosition &IRP, Attributor &A);
3404 
3405   /// See AbstractAttribute::getName()
3406   const std::string getName() const override { return "AANoRecurse"; }
3407 
3408   /// See AbstractAttribute::getIdAddr()
3409   const char *getIdAddr() const override { return &ID; }
3410 
3411   /// This function should return true if the type of the \p AA is AANoRecurse
3412   static bool classof(const AbstractAttribute *AA) {
3413     return (AA->getIdAddr() == &ID);
3414   }
3415 
3416   /// Unique ID (due to the unique address)
3417   static const char ID;
3418 };
3419 
3420 /// An abstract attribute for willreturn.
3421 struct AAWillReturn
3422     : public IRAttribute<Attribute::WillReturn,
3423                          StateWrapper<BooleanState, AbstractAttribute>> {
3424   AAWillReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3425 
3426   /// Return true if "willreturn" is assumed.
3427   bool isAssumedWillReturn() const { return getAssumed(); }
3428 
3429   /// Return true if "willreturn" is known.
3430   bool isKnownWillReturn() const { return getKnown(); }
3431 
3432   /// Create an abstract attribute view for the position \p IRP.
3433   static AAWillReturn &createForPosition(const IRPosition &IRP, Attributor &A);
3434 
3435   /// See AbstractAttribute::getName()
3436   const std::string getName() const override { return "AAWillReturn"; }
3437 
3438   /// See AbstractAttribute::getIdAddr()
3439   const char *getIdAddr() const override { return &ID; }
3440 
3441   /// This function should return true if the type of the \p AA is AAWillReturn
3442   static bool classof(const AbstractAttribute *AA) {
3443     return (AA->getIdAddr() == &ID);
3444   }
3445 
3446   /// Unique ID (due to the unique address)
3447   static const char ID;
3448 };
3449 
3450 /// An abstract attribute for undefined behavior.
3451 struct AAUndefinedBehavior
3452     : public StateWrapper<BooleanState, AbstractAttribute> {
3453   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3454   AAUndefinedBehavior(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3455 
3456   /// Return true if "undefined behavior" is assumed.
3457   bool isAssumedToCauseUB() const { return getAssumed(); }
3458 
3459   /// Return true if "undefined behavior" is assumed for a specific instruction.
3460   virtual bool isAssumedToCauseUB(Instruction *I) const = 0;
3461 
3462   /// Return true if "undefined behavior" is known.
3463   bool isKnownToCauseUB() const { return getKnown(); }
3464 
3465   /// Return true if "undefined behavior" is known for a specific instruction.
3466   virtual bool isKnownToCauseUB(Instruction *I) const = 0;
3467 
3468   /// Create an abstract attribute view for the position \p IRP.
3469   static AAUndefinedBehavior &createForPosition(const IRPosition &IRP,
3470                                                 Attributor &A);
3471 
3472   /// See AbstractAttribute::getName()
3473   const std::string getName() const override { return "AAUndefinedBehavior"; }
3474 
3475   /// See AbstractAttribute::getIdAddr()
3476   const char *getIdAddr() const override { return &ID; }
3477 
3478   /// This function should return true if the type of the \p AA is
3479   /// AAUndefineBehavior
3480   static bool classof(const AbstractAttribute *AA) {
3481     return (AA->getIdAddr() == &ID);
3482   }
3483 
3484   /// Unique ID (due to the unique address)
3485   static const char ID;
3486 };
3487 
3488 /// An abstract interface to determine reachability of point A to B.
3489 struct AAIntraFnReachability
3490     : public StateWrapper<BooleanState, AbstractAttribute> {
3491   using Base = StateWrapper<BooleanState, AbstractAttribute>;
3492   AAIntraFnReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3493 
3494   /// Returns true if 'From' instruction is assumed to reach, 'To' instruction.
3495   /// Users should provide two positions they are interested in, and the class
3496   /// determines (and caches) reachability.
3497   virtual bool isAssumedReachable(
3498       Attributor &A, const Instruction &From, const Instruction &To,
3499       const AA::InstExclusionSetTy *ExclusionSet = nullptr) const = 0;
3500 
3501   /// Create an abstract attribute view for the position \p IRP.
3502   static AAIntraFnReachability &createForPosition(const IRPosition &IRP,
3503                                                   Attributor &A);
3504 
3505   /// See AbstractAttribute::getName()
3506   const std::string getName() const override { return "AAIntraFnReachability"; }
3507 
3508   /// See AbstractAttribute::getIdAddr()
3509   const char *getIdAddr() const override { return &ID; }
3510 
3511   /// This function should return true if the type of the \p AA is
3512   /// AAIntraFnReachability
3513   static bool classof(const AbstractAttribute *AA) {
3514     return (AA->getIdAddr() == &ID);
3515   }
3516 
3517   /// Unique ID (due to the unique address)
3518   static const char ID;
3519 };
3520 
3521 /// An abstract interface for all noalias attributes.
3522 struct AANoAlias
3523     : public IRAttribute<Attribute::NoAlias,
3524                          StateWrapper<BooleanState, AbstractAttribute>> {
3525   AANoAlias(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3526 
3527   /// Return true if we assume that the underlying value is alias.
3528   bool isAssumedNoAlias() const { return getAssumed(); }
3529 
3530   /// Return true if we know that underlying value is noalias.
3531   bool isKnownNoAlias() const { return getKnown(); }
3532 
3533   /// Create an abstract attribute view for the position \p IRP.
3534   static AANoAlias &createForPosition(const IRPosition &IRP, Attributor &A);
3535 
3536   /// See AbstractAttribute::getName()
3537   const std::string getName() const override { return "AANoAlias"; }
3538 
3539   /// See AbstractAttribute::getIdAddr()
3540   const char *getIdAddr() const override { return &ID; }
3541 
3542   /// This function should return true if the type of the \p AA is AANoAlias
3543   static bool classof(const AbstractAttribute *AA) {
3544     return (AA->getIdAddr() == &ID);
3545   }
3546 
3547   /// Unique ID (due to the unique address)
3548   static const char ID;
3549 };
3550 
3551 /// An AbstractAttribute for nofree.
3552 struct AANoFree
3553     : public IRAttribute<Attribute::NoFree,
3554                          StateWrapper<BooleanState, AbstractAttribute>> {
3555   AANoFree(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3556 
3557   /// Return true if "nofree" is assumed.
3558   bool isAssumedNoFree() const { return getAssumed(); }
3559 
3560   /// Return true if "nofree" is known.
3561   bool isKnownNoFree() const { return getKnown(); }
3562 
3563   /// Create an abstract attribute view for the position \p IRP.
3564   static AANoFree &createForPosition(const IRPosition &IRP, Attributor &A);
3565 
3566   /// See AbstractAttribute::getName()
3567   const std::string getName() const override { return "AANoFree"; }
3568 
3569   /// See AbstractAttribute::getIdAddr()
3570   const char *getIdAddr() const override { return &ID; }
3571 
3572   /// This function should return true if the type of the \p AA is AANoFree
3573   static bool classof(const AbstractAttribute *AA) {
3574     return (AA->getIdAddr() == &ID);
3575   }
3576 
3577   /// Unique ID (due to the unique address)
3578   static const char ID;
3579 };
3580 
3581 /// An AbstractAttribute for noreturn.
3582 struct AANoReturn
3583     : public IRAttribute<Attribute::NoReturn,
3584                          StateWrapper<BooleanState, AbstractAttribute>> {
3585   AANoReturn(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3586 
3587   /// Return true if the underlying object is assumed to never return.
3588   bool isAssumedNoReturn() const { return getAssumed(); }
3589 
3590   /// Return true if the underlying object is known to never return.
3591   bool isKnownNoReturn() const { return getKnown(); }
3592 
3593   /// Create an abstract attribute view for the position \p IRP.
3594   static AANoReturn &createForPosition(const IRPosition &IRP, Attributor &A);
3595 
3596   /// See AbstractAttribute::getName()
3597   const std::string getName() const override { return "AANoReturn"; }
3598 
3599   /// See AbstractAttribute::getIdAddr()
3600   const char *getIdAddr() const override { return &ID; }
3601 
3602   /// This function should return true if the type of the \p AA is AANoReturn
3603   static bool classof(const AbstractAttribute *AA) {
3604     return (AA->getIdAddr() == &ID);
3605   }
3606 
3607   /// Unique ID (due to the unique address)
3608   static const char ID;
3609 };
3610 
3611 /// An abstract interface for liveness abstract attribute.
3612 struct AAIsDead
3613     : public StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute> {
3614   using Base = StateWrapper<BitIntegerState<uint8_t, 3, 0>, AbstractAttribute>;
3615   AAIsDead(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
3616 
3617   /// State encoding bits. A set bit in the state means the property holds.
3618   enum {
3619     HAS_NO_EFFECT = 1 << 0,
3620     IS_REMOVABLE = 1 << 1,
3621 
3622     IS_DEAD = HAS_NO_EFFECT | IS_REMOVABLE,
3623   };
3624   static_assert(IS_DEAD == getBestState(), "Unexpected BEST_STATE value");
3625 
3626 protected:
3627   /// The query functions are protected such that other attributes need to go
3628   /// through the Attributor interfaces: `Attributor::isAssumedDead(...)`
3629 
3630   /// Returns true if the underlying value is assumed dead.
3631   virtual bool isAssumedDead() const = 0;
3632 
3633   /// Returns true if the underlying value is known dead.
3634   virtual bool isKnownDead() const = 0;
3635 
3636   /// Returns true if \p BB is known dead.
3637   virtual bool isKnownDead(const BasicBlock *BB) const = 0;
3638 
3639   /// Returns true if \p I is assumed dead.
3640   virtual bool isAssumedDead(const Instruction *I) const = 0;
3641 
3642   /// Returns true if \p I is known dead.
3643   virtual bool isKnownDead(const Instruction *I) const = 0;
3644 
3645   /// Return true if the underlying value is a store that is known to be
3646   /// removable. This is different from dead stores as the removable store
3647   /// can have an effect on live values, especially loads, but that effect
3648   /// is propagated which allows us to remove the store in turn.
3649   virtual bool isRemovableStore() const { return false; }
3650 
3651   /// This method is used to check if at least one instruction in a collection
3652   /// of instructions is live.
3653   template <typename T> bool isLiveInstSet(T begin, T end) const {
3654     for (const auto &I : llvm::make_range(begin, end)) {
3655       assert(I->getFunction() == getIRPosition().getAssociatedFunction() &&
3656              "Instruction must be in the same anchor scope function.");
3657 
3658       if (!isAssumedDead(I))
3659         return true;
3660     }
3661 
3662     return false;
3663   }
3664 
3665 public:
3666   /// Create an abstract attribute view for the position \p IRP.
3667   static AAIsDead &createForPosition(const IRPosition &IRP, Attributor &A);
3668 
3669   /// Determine if \p F might catch asynchronous exceptions.
3670   static bool mayCatchAsynchronousExceptions(const Function &F) {
3671     return F.hasPersonalityFn() && !canSimplifyInvokeNoUnwind(&F);
3672   }
3673 
3674   /// Returns true if \p BB is assumed dead.
3675   virtual bool isAssumedDead(const BasicBlock *BB) const = 0;
3676 
3677   /// Return if the edge from \p From BB to \p To BB is assumed dead.
3678   /// This is specifically useful in AAReachability.
3679   virtual bool isEdgeDead(const BasicBlock *From, const BasicBlock *To) const {
3680     return false;
3681   }
3682 
3683   /// See AbstractAttribute::getName()
3684   const std::string getName() const override { return "AAIsDead"; }
3685 
3686   /// See AbstractAttribute::getIdAddr()
3687   const char *getIdAddr() const override { return &ID; }
3688 
3689   /// This function should return true if the type of the \p AA is AAIsDead
3690   static bool classof(const AbstractAttribute *AA) {
3691     return (AA->getIdAddr() == &ID);
3692   }
3693 
3694   /// Unique ID (due to the unique address)
3695   static const char ID;
3696 
3697   friend struct Attributor;
3698 };
3699 
3700 /// State for dereferenceable attribute
3701 struct DerefState : AbstractState {
3702 
3703   static DerefState getBestState() { return DerefState(); }
3704   static DerefState getBestState(const DerefState &) { return getBestState(); }
3705 
3706   /// Return the worst possible representable state.
3707   static DerefState getWorstState() {
3708     DerefState DS;
3709     DS.indicatePessimisticFixpoint();
3710     return DS;
3711   }
3712   static DerefState getWorstState(const DerefState &) {
3713     return getWorstState();
3714   }
3715 
3716   /// State representing for dereferenceable bytes.
3717   IncIntegerState<> DerefBytesState;
3718 
3719   /// Map representing for accessed memory offsets and sizes.
3720   /// A key is Offset and a value is size.
3721   /// If there is a load/store instruction something like,
3722   ///   p[offset] = v;
3723   /// (offset, sizeof(v)) will be inserted to this map.
3724   /// std::map is used because we want to iterate keys in ascending order.
3725   std::map<int64_t, uint64_t> AccessedBytesMap;
3726 
3727   /// Helper function to calculate dereferenceable bytes from current known
3728   /// bytes and accessed bytes.
3729   ///
3730   /// int f(int *A){
3731   ///    *A = 0;
3732   ///    *(A+2) = 2;
3733   ///    *(A+1) = 1;
3734   ///    *(A+10) = 10;
3735   /// }
3736   /// ```
3737   /// In that case, AccessedBytesMap is `{0:4, 4:4, 8:4, 40:4}`.
3738   /// AccessedBytesMap is std::map so it is iterated in accending order on
3739   /// key(Offset). So KnownBytes will be updated like this:
3740   ///
3741   /// |Access | KnownBytes
3742   /// |(0, 4)| 0 -> 4
3743   /// |(4, 4)| 4 -> 8
3744   /// |(8, 4)| 8 -> 12
3745   /// |(40, 4) | 12 (break)
3746   void computeKnownDerefBytesFromAccessedMap() {
3747     int64_t KnownBytes = DerefBytesState.getKnown();
3748     for (auto &Access : AccessedBytesMap) {
3749       if (KnownBytes < Access.first)
3750         break;
3751       KnownBytes = std::max(KnownBytes, Access.first + (int64_t)Access.second);
3752     }
3753 
3754     DerefBytesState.takeKnownMaximum(KnownBytes);
3755   }
3756 
3757   /// State representing that whether the value is globaly dereferenceable.
3758   BooleanState GlobalState;
3759 
3760   /// See AbstractState::isValidState()
3761   bool isValidState() const override { return DerefBytesState.isValidState(); }
3762 
3763   /// See AbstractState::isAtFixpoint()
3764   bool isAtFixpoint() const override {
3765     return !isValidState() ||
3766            (DerefBytesState.isAtFixpoint() && GlobalState.isAtFixpoint());
3767   }
3768 
3769   /// See AbstractState::indicateOptimisticFixpoint(...)
3770   ChangeStatus indicateOptimisticFixpoint() override {
3771     DerefBytesState.indicateOptimisticFixpoint();
3772     GlobalState.indicateOptimisticFixpoint();
3773     return ChangeStatus::UNCHANGED;
3774   }
3775 
3776   /// See AbstractState::indicatePessimisticFixpoint(...)
3777   ChangeStatus indicatePessimisticFixpoint() override {
3778     DerefBytesState.indicatePessimisticFixpoint();
3779     GlobalState.indicatePessimisticFixpoint();
3780     return ChangeStatus::CHANGED;
3781   }
3782 
3783   /// Update known dereferenceable bytes.
3784   void takeKnownDerefBytesMaximum(uint64_t Bytes) {
3785     DerefBytesState.takeKnownMaximum(Bytes);
3786 
3787     // Known bytes might increase.
3788     computeKnownDerefBytesFromAccessedMap();
3789   }
3790 
3791   /// Update assumed dereferenceable bytes.
3792   void takeAssumedDerefBytesMinimum(uint64_t Bytes) {
3793     DerefBytesState.takeAssumedMinimum(Bytes);
3794   }
3795 
3796   /// Add accessed bytes to the map.
3797   void addAccessedBytes(int64_t Offset, uint64_t Size) {
3798     uint64_t &AccessedBytes = AccessedBytesMap[Offset];
3799     AccessedBytes = std::max(AccessedBytes, Size);
3800 
3801     // Known bytes might increase.
3802     computeKnownDerefBytesFromAccessedMap();
3803   }
3804 
3805   /// Equality for DerefState.
3806   bool operator==(const DerefState &R) const {
3807     return this->DerefBytesState == R.DerefBytesState &&
3808            this->GlobalState == R.GlobalState;
3809   }
3810 
3811   /// Inequality for DerefState.
3812   bool operator!=(const DerefState &R) const { return !(*this == R); }
3813 
3814   /// See IntegerStateBase::operator^=
3815   DerefState operator^=(const DerefState &R) {
3816     DerefBytesState ^= R.DerefBytesState;
3817     GlobalState ^= R.GlobalState;
3818     return *this;
3819   }
3820 
3821   /// See IntegerStateBase::operator+=
3822   DerefState operator+=(const DerefState &R) {
3823     DerefBytesState += R.DerefBytesState;
3824     GlobalState += R.GlobalState;
3825     return *this;
3826   }
3827 
3828   /// See IntegerStateBase::operator&=
3829   DerefState operator&=(const DerefState &R) {
3830     DerefBytesState &= R.DerefBytesState;
3831     GlobalState &= R.GlobalState;
3832     return *this;
3833   }
3834 
3835   /// See IntegerStateBase::operator|=
3836   DerefState operator|=(const DerefState &R) {
3837     DerefBytesState |= R.DerefBytesState;
3838     GlobalState |= R.GlobalState;
3839     return *this;
3840   }
3841 
3842 protected:
3843   const AANonNull *NonNullAA = nullptr;
3844 };
3845 
3846 /// An abstract interface for all dereferenceable attribute.
3847 struct AADereferenceable
3848     : public IRAttribute<Attribute::Dereferenceable,
3849                          StateWrapper<DerefState, AbstractAttribute>> {
3850   AADereferenceable(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3851 
3852   /// Return true if we assume that the underlying value is nonnull.
3853   bool isAssumedNonNull() const {
3854     return NonNullAA && NonNullAA->isAssumedNonNull();
3855   }
3856 
3857   /// Return true if we know that the underlying value is nonnull.
3858   bool isKnownNonNull() const {
3859     return NonNullAA && NonNullAA->isKnownNonNull();
3860   }
3861 
3862   /// Return true if we assume that underlying value is
3863   /// dereferenceable(_or_null) globally.
3864   bool isAssumedGlobal() const { return GlobalState.getAssumed(); }
3865 
3866   /// Return true if we know that underlying value is
3867   /// dereferenceable(_or_null) globally.
3868   bool isKnownGlobal() const { return GlobalState.getKnown(); }
3869 
3870   /// Return assumed dereferenceable bytes.
3871   uint32_t getAssumedDereferenceableBytes() const {
3872     return DerefBytesState.getAssumed();
3873   }
3874 
3875   /// Return known dereferenceable bytes.
3876   uint32_t getKnownDereferenceableBytes() const {
3877     return DerefBytesState.getKnown();
3878   }
3879 
3880   /// Create an abstract attribute view for the position \p IRP.
3881   static AADereferenceable &createForPosition(const IRPosition &IRP,
3882                                               Attributor &A);
3883 
3884   /// See AbstractAttribute::getName()
3885   const std::string getName() const override { return "AADereferenceable"; }
3886 
3887   /// See AbstractAttribute::getIdAddr()
3888   const char *getIdAddr() const override { return &ID; }
3889 
3890   /// This function should return true if the type of the \p AA is
3891   /// AADereferenceable
3892   static bool classof(const AbstractAttribute *AA) {
3893     return (AA->getIdAddr() == &ID);
3894   }
3895 
3896   /// Unique ID (due to the unique address)
3897   static const char ID;
3898 };
3899 
3900 using AAAlignmentStateType =
3901     IncIntegerState<uint64_t, Value::MaximumAlignment, 1>;
3902 /// An abstract interface for all align attributes.
3903 struct AAAlign : public IRAttribute<
3904                      Attribute::Alignment,
3905                      StateWrapper<AAAlignmentStateType, AbstractAttribute>> {
3906   AAAlign(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3907 
3908   /// Return assumed alignment.
3909   Align getAssumedAlign() const { return Align(getAssumed()); }
3910 
3911   /// Return known alignment.
3912   Align getKnownAlign() const { return Align(getKnown()); }
3913 
3914   /// See AbstractAttribute::getName()
3915   const std::string getName() const override { return "AAAlign"; }
3916 
3917   /// See AbstractAttribute::getIdAddr()
3918   const char *getIdAddr() const override { return &ID; }
3919 
3920   /// This function should return true if the type of the \p AA is AAAlign
3921   static bool classof(const AbstractAttribute *AA) {
3922     return (AA->getIdAddr() == &ID);
3923   }
3924 
3925   /// Create an abstract attribute view for the position \p IRP.
3926   static AAAlign &createForPosition(const IRPosition &IRP, Attributor &A);
3927 
3928   /// Unique ID (due to the unique address)
3929   static const char ID;
3930 };
3931 
3932 /// An abstract interface to track if a value leaves it's defining function
3933 /// instance.
3934 /// TODO: We should make it a ternary AA tracking uniqueness, and uniqueness
3935 /// wrt. the Attributor analysis separately.
3936 struct AAInstanceInfo : public StateWrapper<BooleanState, AbstractAttribute> {
3937   AAInstanceInfo(const IRPosition &IRP, Attributor &A)
3938       : StateWrapper<BooleanState, AbstractAttribute>(IRP) {}
3939 
3940   /// Return true if we know that the underlying value is unique in its scope
3941   /// wrt. the Attributor analysis. That means it might not be unique but we can
3942   /// still use pointer equality without risking to represent two instances with
3943   /// one `llvm::Value`.
3944   bool isKnownUniqueForAnalysis() const { return isKnown(); }
3945 
3946   /// Return true if we assume that the underlying value is unique in its scope
3947   /// wrt. the Attributor analysis. That means it might not be unique but we can
3948   /// still use pointer equality without risking to represent two instances with
3949   /// one `llvm::Value`.
3950   bool isAssumedUniqueForAnalysis() const { return isAssumed(); }
3951 
3952   /// Create an abstract attribute view for the position \p IRP.
3953   static AAInstanceInfo &createForPosition(const IRPosition &IRP,
3954                                            Attributor &A);
3955 
3956   /// See AbstractAttribute::getName()
3957   const std::string getName() const override { return "AAInstanceInfo"; }
3958 
3959   /// See AbstractAttribute::getIdAddr()
3960   const char *getIdAddr() const override { return &ID; }
3961 
3962   /// This function should return true if the type of the \p AA is
3963   /// AAInstanceInfo
3964   static bool classof(const AbstractAttribute *AA) {
3965     return (AA->getIdAddr() == &ID);
3966   }
3967 
3968   /// Unique ID (due to the unique address)
3969   static const char ID;
3970 };
3971 
3972 /// An abstract interface for all nocapture attributes.
3973 struct AANoCapture
3974     : public IRAttribute<
3975           Attribute::NoCapture,
3976           StateWrapper<BitIntegerState<uint16_t, 7, 0>, AbstractAttribute>> {
3977   AANoCapture(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
3978 
3979   /// State encoding bits. A set bit in the state means the property holds.
3980   /// NO_CAPTURE is the best possible state, 0 the worst possible state.
3981   enum {
3982     NOT_CAPTURED_IN_MEM = 1 << 0,
3983     NOT_CAPTURED_IN_INT = 1 << 1,
3984     NOT_CAPTURED_IN_RET = 1 << 2,
3985 
3986     /// If we do not capture the value in memory or through integers we can only
3987     /// communicate it back as a derived pointer.
3988     NO_CAPTURE_MAYBE_RETURNED = NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT,
3989 
3990     /// If we do not capture the value in memory, through integers, or as a
3991     /// derived pointer we know it is not captured.
3992     NO_CAPTURE =
3993         NOT_CAPTURED_IN_MEM | NOT_CAPTURED_IN_INT | NOT_CAPTURED_IN_RET,
3994   };
3995 
3996   /// Return true if we know that the underlying value is not captured in its
3997   /// respective scope.
3998   bool isKnownNoCapture() const { return isKnown(NO_CAPTURE); }
3999 
4000   /// Return true if we assume that the underlying value is not captured in its
4001   /// respective scope.
4002   bool isAssumedNoCapture() const { return isAssumed(NO_CAPTURE); }
4003 
4004   /// Return true if we know that the underlying value is not captured in its
4005   /// respective scope but we allow it to escape through a "return".
4006   bool isKnownNoCaptureMaybeReturned() const {
4007     return isKnown(NO_CAPTURE_MAYBE_RETURNED);
4008   }
4009 
4010   /// Return true if we assume that the underlying value is not captured in its
4011   /// respective scope but we allow it to escape through a "return".
4012   bool isAssumedNoCaptureMaybeReturned() const {
4013     return isAssumed(NO_CAPTURE_MAYBE_RETURNED);
4014   }
4015 
4016   /// Create an abstract attribute view for the position \p IRP.
4017   static AANoCapture &createForPosition(const IRPosition &IRP, Attributor &A);
4018 
4019   /// See AbstractAttribute::getName()
4020   const std::string getName() const override { return "AANoCapture"; }
4021 
4022   /// See AbstractAttribute::getIdAddr()
4023   const char *getIdAddr() const override { return &ID; }
4024 
4025   /// This function should return true if the type of the \p AA is AANoCapture
4026   static bool classof(const AbstractAttribute *AA) {
4027     return (AA->getIdAddr() == &ID);
4028   }
4029 
4030   /// Unique ID (due to the unique address)
4031   static const char ID;
4032 };
4033 
4034 struct ValueSimplifyStateType : public AbstractState {
4035 
4036   ValueSimplifyStateType(Type *Ty) : Ty(Ty) {}
4037 
4038   static ValueSimplifyStateType getBestState(Type *Ty) {
4039     return ValueSimplifyStateType(Ty);
4040   }
4041   static ValueSimplifyStateType getBestState(const ValueSimplifyStateType &VS) {
4042     return getBestState(VS.Ty);
4043   }
4044 
4045   /// Return the worst possible representable state.
4046   static ValueSimplifyStateType getWorstState(Type *Ty) {
4047     ValueSimplifyStateType DS(Ty);
4048     DS.indicatePessimisticFixpoint();
4049     return DS;
4050   }
4051   static ValueSimplifyStateType
4052   getWorstState(const ValueSimplifyStateType &VS) {
4053     return getWorstState(VS.Ty);
4054   }
4055 
4056   /// See AbstractState::isValidState(...)
4057   bool isValidState() const override { return BS.isValidState(); }
4058 
4059   /// See AbstractState::isAtFixpoint(...)
4060   bool isAtFixpoint() const override { return BS.isAtFixpoint(); }
4061 
4062   /// Return the assumed state encoding.
4063   ValueSimplifyStateType getAssumed() { return *this; }
4064   const ValueSimplifyStateType &getAssumed() const { return *this; }
4065 
4066   /// See AbstractState::indicatePessimisticFixpoint(...)
4067   ChangeStatus indicatePessimisticFixpoint() override {
4068     return BS.indicatePessimisticFixpoint();
4069   }
4070 
4071   /// See AbstractState::indicateOptimisticFixpoint(...)
4072   ChangeStatus indicateOptimisticFixpoint() override {
4073     return BS.indicateOptimisticFixpoint();
4074   }
4075 
4076   /// "Clamp" this state with \p PVS.
4077   ValueSimplifyStateType operator^=(const ValueSimplifyStateType &VS) {
4078     BS ^= VS.BS;
4079     unionAssumed(VS.SimplifiedAssociatedValue);
4080     return *this;
4081   }
4082 
4083   bool operator==(const ValueSimplifyStateType &RHS) const {
4084     if (isValidState() != RHS.isValidState())
4085       return false;
4086     if (!isValidState() && !RHS.isValidState())
4087       return true;
4088     return SimplifiedAssociatedValue == RHS.SimplifiedAssociatedValue;
4089   }
4090 
4091 protected:
4092   /// The type of the original value.
4093   Type *Ty;
4094 
4095   /// Merge \p Other into the currently assumed simplified value
4096   bool unionAssumed(std::optional<Value *> Other);
4097 
4098   /// Helper to track validity and fixpoint
4099   BooleanState BS;
4100 
4101   /// An assumed simplified value. Initially, it is set to std::nullopt, which
4102   /// means that the value is not clear under current assumption. If in the
4103   /// pessimistic state, getAssumedSimplifiedValue doesn't return this value but
4104   /// returns orignal associated value.
4105   std::optional<Value *> SimplifiedAssociatedValue;
4106 };
4107 
4108 /// An abstract interface for value simplify abstract attribute.
4109 struct AAValueSimplify
4110     : public StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *> {
4111   using Base = StateWrapper<ValueSimplifyStateType, AbstractAttribute, Type *>;
4112   AAValueSimplify(const IRPosition &IRP, Attributor &A)
4113       : Base(IRP, IRP.getAssociatedType()) {}
4114 
4115   /// Create an abstract attribute view for the position \p IRP.
4116   static AAValueSimplify &createForPosition(const IRPosition &IRP,
4117                                             Attributor &A);
4118 
4119   /// See AbstractAttribute::getName()
4120   const std::string getName() const override { return "AAValueSimplify"; }
4121 
4122   /// See AbstractAttribute::getIdAddr()
4123   const char *getIdAddr() const override { return &ID; }
4124 
4125   /// This function should return true if the type of the \p AA is
4126   /// AAValueSimplify
4127   static bool classof(const AbstractAttribute *AA) {
4128     return (AA->getIdAddr() == &ID);
4129   }
4130 
4131   /// Unique ID (due to the unique address)
4132   static const char ID;
4133 
4134 private:
4135   /// Return an assumed simplified value if a single candidate is found. If
4136   /// there cannot be one, return original value. If it is not clear yet, return
4137   /// std::nullopt.
4138   ///
4139   /// Use `Attributor::getAssumedSimplified` for value simplification.
4140   virtual std::optional<Value *>
4141   getAssumedSimplifiedValue(Attributor &A) const = 0;
4142 
4143   friend struct Attributor;
4144 };
4145 
4146 struct AAHeapToStack : public StateWrapper<BooleanState, AbstractAttribute> {
4147   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4148   AAHeapToStack(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4149 
4150   /// Returns true if HeapToStack conversion is assumed to be possible.
4151   virtual bool isAssumedHeapToStack(const CallBase &CB) const = 0;
4152 
4153   /// Returns true if HeapToStack conversion is assumed and the CB is a
4154   /// callsite to a free operation to be removed.
4155   virtual bool isAssumedHeapToStackRemovedFree(CallBase &CB) const = 0;
4156 
4157   /// Create an abstract attribute view for the position \p IRP.
4158   static AAHeapToStack &createForPosition(const IRPosition &IRP, Attributor &A);
4159 
4160   /// See AbstractAttribute::getName()
4161   const std::string getName() const override { return "AAHeapToStack"; }
4162 
4163   /// See AbstractAttribute::getIdAddr()
4164   const char *getIdAddr() const override { return &ID; }
4165 
4166   /// This function should return true if the type of the \p AA is AAHeapToStack
4167   static bool classof(const AbstractAttribute *AA) {
4168     return (AA->getIdAddr() == &ID);
4169   }
4170 
4171   /// Unique ID (due to the unique address)
4172   static const char ID;
4173 };
4174 
4175 /// An abstract interface for privatizability.
4176 ///
4177 /// A pointer is privatizable if it can be replaced by a new, private one.
4178 /// Privatizing pointer reduces the use count, interaction between unrelated
4179 /// code parts.
4180 ///
4181 /// In order for a pointer to be privatizable its value cannot be observed
4182 /// (=nocapture), it is (for now) not written (=readonly & noalias), we know
4183 /// what values are necessary to make the private copy look like the original
4184 /// one, and the values we need can be loaded (=dereferenceable).
4185 struct AAPrivatizablePtr
4186     : public StateWrapper<BooleanState, AbstractAttribute> {
4187   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4188   AAPrivatizablePtr(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4189 
4190   /// Returns true if pointer privatization is assumed to be possible.
4191   bool isAssumedPrivatizablePtr() const { return getAssumed(); }
4192 
4193   /// Returns true if pointer privatization is known to be possible.
4194   bool isKnownPrivatizablePtr() const { return getKnown(); }
4195 
4196   /// Return the type we can choose for a private copy of the underlying
4197   /// value. std::nullopt means it is not clear yet, nullptr means there is
4198   /// none.
4199   virtual std::optional<Type *> getPrivatizableType() const = 0;
4200 
4201   /// Create an abstract attribute view for the position \p IRP.
4202   static AAPrivatizablePtr &createForPosition(const IRPosition &IRP,
4203                                               Attributor &A);
4204 
4205   /// See AbstractAttribute::getName()
4206   const std::string getName() const override { return "AAPrivatizablePtr"; }
4207 
4208   /// See AbstractAttribute::getIdAddr()
4209   const char *getIdAddr() const override { return &ID; }
4210 
4211   /// This function should return true if the type of the \p AA is
4212   /// AAPricatizablePtr
4213   static bool classof(const AbstractAttribute *AA) {
4214     return (AA->getIdAddr() == &ID);
4215   }
4216 
4217   /// Unique ID (due to the unique address)
4218   static const char ID;
4219 };
4220 
4221 /// An abstract interface for memory access kind related attributes
4222 /// (readnone/readonly/writeonly).
4223 struct AAMemoryBehavior
4224     : public IRAttribute<
4225           Attribute::ReadNone,
4226           StateWrapper<BitIntegerState<uint8_t, 3>, AbstractAttribute>> {
4227   AAMemoryBehavior(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4228 
4229   /// State encoding bits. A set bit in the state means the property holds.
4230   /// BEST_STATE is the best possible state, 0 the worst possible state.
4231   enum {
4232     NO_READS = 1 << 0,
4233     NO_WRITES = 1 << 1,
4234     NO_ACCESSES = NO_READS | NO_WRITES,
4235 
4236     BEST_STATE = NO_ACCESSES,
4237   };
4238   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
4239 
4240   /// Return true if we know that the underlying value is not read or accessed
4241   /// in its respective scope.
4242   bool isKnownReadNone() const { return isKnown(NO_ACCESSES); }
4243 
4244   /// Return true if we assume that the underlying value is not read or accessed
4245   /// in its respective scope.
4246   bool isAssumedReadNone() const { return isAssumed(NO_ACCESSES); }
4247 
4248   /// Return true if we know that the underlying value is not accessed
4249   /// (=written) in its respective scope.
4250   bool isKnownReadOnly() const { return isKnown(NO_WRITES); }
4251 
4252   /// Return true if we assume that the underlying value is not accessed
4253   /// (=written) in its respective scope.
4254   bool isAssumedReadOnly() const { return isAssumed(NO_WRITES); }
4255 
4256   /// Return true if we know that the underlying value is not read in its
4257   /// respective scope.
4258   bool isKnownWriteOnly() const { return isKnown(NO_READS); }
4259 
4260   /// Return true if we assume that the underlying value is not read in its
4261   /// respective scope.
4262   bool isAssumedWriteOnly() const { return isAssumed(NO_READS); }
4263 
4264   /// Create an abstract attribute view for the position \p IRP.
4265   static AAMemoryBehavior &createForPosition(const IRPosition &IRP,
4266                                              Attributor &A);
4267 
4268   /// See AbstractAttribute::getName()
4269   const std::string getName() const override { return "AAMemoryBehavior"; }
4270 
4271   /// See AbstractAttribute::getIdAddr()
4272   const char *getIdAddr() const override { return &ID; }
4273 
4274   /// This function should return true if the type of the \p AA is
4275   /// AAMemoryBehavior
4276   static bool classof(const AbstractAttribute *AA) {
4277     return (AA->getIdAddr() == &ID);
4278   }
4279 
4280   /// Unique ID (due to the unique address)
4281   static const char ID;
4282 };
4283 
4284 /// An abstract interface for all memory location attributes
4285 /// (readnone/argmemonly/inaccessiblememonly/inaccessibleorargmemonly).
4286 struct AAMemoryLocation
4287     : public IRAttribute<
4288           Attribute::ReadNone,
4289           StateWrapper<BitIntegerState<uint32_t, 511>, AbstractAttribute>> {
4290   using MemoryLocationsKind = StateType::base_t;
4291 
4292   AAMemoryLocation(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4293 
4294   /// Encoding of different locations that could be accessed by a memory
4295   /// access.
4296   enum {
4297     ALL_LOCATIONS = 0,
4298     NO_LOCAL_MEM = 1 << 0,
4299     NO_CONST_MEM = 1 << 1,
4300     NO_GLOBAL_INTERNAL_MEM = 1 << 2,
4301     NO_GLOBAL_EXTERNAL_MEM = 1 << 3,
4302     NO_GLOBAL_MEM = NO_GLOBAL_INTERNAL_MEM | NO_GLOBAL_EXTERNAL_MEM,
4303     NO_ARGUMENT_MEM = 1 << 4,
4304     NO_INACCESSIBLE_MEM = 1 << 5,
4305     NO_MALLOCED_MEM = 1 << 6,
4306     NO_UNKOWN_MEM = 1 << 7,
4307     NO_LOCATIONS = NO_LOCAL_MEM | NO_CONST_MEM | NO_GLOBAL_INTERNAL_MEM |
4308                    NO_GLOBAL_EXTERNAL_MEM | NO_ARGUMENT_MEM |
4309                    NO_INACCESSIBLE_MEM | NO_MALLOCED_MEM | NO_UNKOWN_MEM,
4310 
4311     // Helper bit to track if we gave up or not.
4312     VALID_STATE = NO_LOCATIONS + 1,
4313 
4314     BEST_STATE = NO_LOCATIONS | VALID_STATE,
4315   };
4316   static_assert(BEST_STATE == getBestState(), "Unexpected BEST_STATE value");
4317 
4318   /// Return true if we know that the associated functions has no observable
4319   /// accesses.
4320   bool isKnownReadNone() const { return isKnown(NO_LOCATIONS); }
4321 
4322   /// Return true if we assume that the associated functions has no observable
4323   /// accesses.
4324   bool isAssumedReadNone() const {
4325     return isAssumed(NO_LOCATIONS) || isAssumedStackOnly();
4326   }
4327 
4328   /// Return true if we know that the associated functions has at most
4329   /// local/stack accesses.
4330   bool isKnowStackOnly() const {
4331     return isKnown(inverseLocation(NO_LOCAL_MEM, true, true));
4332   }
4333 
4334   /// Return true if we assume that the associated functions has at most
4335   /// local/stack accesses.
4336   bool isAssumedStackOnly() const {
4337     return isAssumed(inverseLocation(NO_LOCAL_MEM, true, true));
4338   }
4339 
4340   /// Return true if we know that the underlying value will only access
4341   /// inaccesible memory only (see Attribute::InaccessibleMemOnly).
4342   bool isKnownInaccessibleMemOnly() const {
4343     return isKnown(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
4344   }
4345 
4346   /// Return true if we assume that the underlying value will only access
4347   /// inaccesible memory only (see Attribute::InaccessibleMemOnly).
4348   bool isAssumedInaccessibleMemOnly() const {
4349     return isAssumed(inverseLocation(NO_INACCESSIBLE_MEM, true, true));
4350   }
4351 
4352   /// Return true if we know that the underlying value will only access
4353   /// argument pointees (see Attribute::ArgMemOnly).
4354   bool isKnownArgMemOnly() const {
4355     return isKnown(inverseLocation(NO_ARGUMENT_MEM, true, true));
4356   }
4357 
4358   /// Return true if we assume that the underlying value will only access
4359   /// argument pointees (see Attribute::ArgMemOnly).
4360   bool isAssumedArgMemOnly() const {
4361     return isAssumed(inverseLocation(NO_ARGUMENT_MEM, true, true));
4362   }
4363 
4364   /// Return true if we know that the underlying value will only access
4365   /// inaccesible memory or argument pointees (see
4366   /// Attribute::InaccessibleOrArgMemOnly).
4367   bool isKnownInaccessibleOrArgMemOnly() const {
4368     return isKnown(
4369         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
4370   }
4371 
4372   /// Return true if we assume that the underlying value will only access
4373   /// inaccesible memory or argument pointees (see
4374   /// Attribute::InaccessibleOrArgMemOnly).
4375   bool isAssumedInaccessibleOrArgMemOnly() const {
4376     return isAssumed(
4377         inverseLocation(NO_INACCESSIBLE_MEM | NO_ARGUMENT_MEM, true, true));
4378   }
4379 
4380   /// Return true if the underlying value may access memory through arguement
4381   /// pointers of the associated function, if any.
4382   bool mayAccessArgMem() const { return !isAssumed(NO_ARGUMENT_MEM); }
4383 
4384   /// Return true if only the memory locations specififed by \p MLK are assumed
4385   /// to be accessed by the associated function.
4386   bool isAssumedSpecifiedMemOnly(MemoryLocationsKind MLK) const {
4387     return isAssumed(MLK);
4388   }
4389 
4390   /// Return the locations that are assumed to be not accessed by the associated
4391   /// function, if any.
4392   MemoryLocationsKind getAssumedNotAccessedLocation() const {
4393     return getAssumed();
4394   }
4395 
4396   /// Return the inverse of location \p Loc, thus for NO_XXX the return
4397   /// describes ONLY_XXX. The flags \p AndLocalMem and \p AndConstMem determine
4398   /// if local (=stack) and constant memory are allowed as well. Most of the
4399   /// time we do want them to be included, e.g., argmemonly allows accesses via
4400   /// argument pointers or local or constant memory accesses.
4401   static MemoryLocationsKind
4402   inverseLocation(MemoryLocationsKind Loc, bool AndLocalMem, bool AndConstMem) {
4403     return NO_LOCATIONS & ~(Loc | (AndLocalMem ? NO_LOCAL_MEM : 0) |
4404                             (AndConstMem ? NO_CONST_MEM : 0));
4405   };
4406 
4407   /// Return the locations encoded by \p MLK as a readable string.
4408   static std::string getMemoryLocationsAsStr(MemoryLocationsKind MLK);
4409 
4410   /// Simple enum to distinguish read/write/read-write accesses.
4411   enum AccessKind {
4412     NONE = 0,
4413     READ = 1 << 0,
4414     WRITE = 1 << 1,
4415     READ_WRITE = READ | WRITE,
4416   };
4417 
4418   /// Check \p Pred on all accesses to the memory kinds specified by \p MLK.
4419   ///
4420   /// This method will evaluate \p Pred on all accesses (access instruction +
4421   /// underlying accessed memory pointer) and it will return true if \p Pred
4422   /// holds every time.
4423   virtual bool checkForAllAccessesToMemoryKind(
4424       function_ref<bool(const Instruction *, const Value *, AccessKind,
4425                         MemoryLocationsKind)>
4426           Pred,
4427       MemoryLocationsKind MLK) const = 0;
4428 
4429   /// Create an abstract attribute view for the position \p IRP.
4430   static AAMemoryLocation &createForPosition(const IRPosition &IRP,
4431                                              Attributor &A);
4432 
4433   /// See AbstractState::getAsStr().
4434   const std::string getAsStr() const override {
4435     return getMemoryLocationsAsStr(getAssumedNotAccessedLocation());
4436   }
4437 
4438   /// See AbstractAttribute::getName()
4439   const std::string getName() const override { return "AAMemoryLocation"; }
4440 
4441   /// See AbstractAttribute::getIdAddr()
4442   const char *getIdAddr() const override { return &ID; }
4443 
4444   /// This function should return true if the type of the \p AA is
4445   /// AAMemoryLocation
4446   static bool classof(const AbstractAttribute *AA) {
4447     return (AA->getIdAddr() == &ID);
4448   }
4449 
4450   /// Unique ID (due to the unique address)
4451   static const char ID;
4452 };
4453 
4454 /// An abstract interface for range value analysis.
4455 struct AAValueConstantRange
4456     : public StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t> {
4457   using Base = StateWrapper<IntegerRangeState, AbstractAttribute, uint32_t>;
4458   AAValueConstantRange(const IRPosition &IRP, Attributor &A)
4459       : Base(IRP, IRP.getAssociatedType()->getIntegerBitWidth()) {}
4460 
4461   /// See AbstractAttribute::getState(...).
4462   IntegerRangeState &getState() override { return *this; }
4463   const IntegerRangeState &getState() const override { return *this; }
4464 
4465   /// Create an abstract attribute view for the position \p IRP.
4466   static AAValueConstantRange &createForPosition(const IRPosition &IRP,
4467                                                  Attributor &A);
4468 
4469   /// Return an assumed range for the associated value a program point \p CtxI.
4470   /// If \p I is nullptr, simply return an assumed range.
4471   virtual ConstantRange
4472   getAssumedConstantRange(Attributor &A,
4473                           const Instruction *CtxI = nullptr) const = 0;
4474 
4475   /// Return a known range for the associated value at a program point \p CtxI.
4476   /// If \p I is nullptr, simply return a known range.
4477   virtual ConstantRange
4478   getKnownConstantRange(Attributor &A,
4479                         const Instruction *CtxI = nullptr) const = 0;
4480 
4481   /// Return an assumed constant for the associated value a program point \p
4482   /// CtxI.
4483   std::optional<Constant *>
4484   getAssumedConstant(Attributor &A, const Instruction *CtxI = nullptr) const {
4485     ConstantRange RangeV = getAssumedConstantRange(A, CtxI);
4486     if (auto *C = RangeV.getSingleElement()) {
4487       Type *Ty = getAssociatedValue().getType();
4488       return cast_or_null<Constant>(
4489           AA::getWithType(*ConstantInt::get(Ty->getContext(), *C), *Ty));
4490     }
4491     if (RangeV.isEmptySet())
4492       return std::nullopt;
4493     return nullptr;
4494   }
4495 
4496   /// See AbstractAttribute::getName()
4497   const std::string getName() const override { return "AAValueConstantRange"; }
4498 
4499   /// See AbstractAttribute::getIdAddr()
4500   const char *getIdAddr() const override { return &ID; }
4501 
4502   /// This function should return true if the type of the \p AA is
4503   /// AAValueConstantRange
4504   static bool classof(const AbstractAttribute *AA) {
4505     return (AA->getIdAddr() == &ID);
4506   }
4507 
4508   /// Unique ID (due to the unique address)
4509   static const char ID;
4510 };
4511 
4512 /// A class for a set state.
4513 /// The assumed boolean state indicates whether the corresponding set is full
4514 /// set or not. If the assumed state is false, this is the worst state. The
4515 /// worst state (invalid state) of set of potential values is when the set
4516 /// contains every possible value (i.e. we cannot in any way limit the value
4517 /// that the target position can take). That never happens naturally, we only
4518 /// force it. As for the conditions under which we force it, see
4519 /// AAPotentialConstantValues.
4520 template <typename MemberTy> struct PotentialValuesState : AbstractState {
4521   using SetTy = SmallSetVector<MemberTy, 8>;
4522 
4523   PotentialValuesState() : IsValidState(true), UndefIsContained(false) {}
4524 
4525   PotentialValuesState(bool IsValid)
4526       : IsValidState(IsValid), UndefIsContained(false) {}
4527 
4528   /// See AbstractState::isValidState(...)
4529   bool isValidState() const override { return IsValidState.isValidState(); }
4530 
4531   /// See AbstractState::isAtFixpoint(...)
4532   bool isAtFixpoint() const override { return IsValidState.isAtFixpoint(); }
4533 
4534   /// See AbstractState::indicatePessimisticFixpoint(...)
4535   ChangeStatus indicatePessimisticFixpoint() override {
4536     return IsValidState.indicatePessimisticFixpoint();
4537   }
4538 
4539   /// See AbstractState::indicateOptimisticFixpoint(...)
4540   ChangeStatus indicateOptimisticFixpoint() override {
4541     return IsValidState.indicateOptimisticFixpoint();
4542   }
4543 
4544   /// Return the assumed state
4545   PotentialValuesState &getAssumed() { return *this; }
4546   const PotentialValuesState &getAssumed() const { return *this; }
4547 
4548   /// Return this set. We should check whether this set is valid or not by
4549   /// isValidState() before calling this function.
4550   const SetTy &getAssumedSet() const {
4551     assert(isValidState() && "This set shoud not be used when it is invalid!");
4552     return Set;
4553   }
4554 
4555   /// Returns whether this state contains an undef value or not.
4556   bool undefIsContained() const {
4557     assert(isValidState() && "This flag shoud not be used when it is invalid!");
4558     return UndefIsContained;
4559   }
4560 
4561   bool operator==(const PotentialValuesState &RHS) const {
4562     if (isValidState() != RHS.isValidState())
4563       return false;
4564     if (!isValidState() && !RHS.isValidState())
4565       return true;
4566     if (undefIsContained() != RHS.undefIsContained())
4567       return false;
4568     return Set == RHS.getAssumedSet();
4569   }
4570 
4571   /// Maximum number of potential values to be tracked.
4572   /// This is set by -attributor-max-potential-values command line option
4573   static unsigned MaxPotentialValues;
4574 
4575   /// Return empty set as the best state of potential values.
4576   static PotentialValuesState getBestState() {
4577     return PotentialValuesState(true);
4578   }
4579 
4580   static PotentialValuesState getBestState(const PotentialValuesState &PVS) {
4581     return getBestState();
4582   }
4583 
4584   /// Return full set as the worst state of potential values.
4585   static PotentialValuesState getWorstState() {
4586     return PotentialValuesState(false);
4587   }
4588 
4589   /// Union assumed set with the passed value.
4590   void unionAssumed(const MemberTy &C) { insert(C); }
4591 
4592   /// Union assumed set with assumed set of the passed state \p PVS.
4593   void unionAssumed(const PotentialValuesState &PVS) { unionWith(PVS); }
4594 
4595   /// Union assumed set with an undef value.
4596   void unionAssumedWithUndef() { unionWithUndef(); }
4597 
4598   /// "Clamp" this state with \p PVS.
4599   PotentialValuesState operator^=(const PotentialValuesState &PVS) {
4600     IsValidState ^= PVS.IsValidState;
4601     unionAssumed(PVS);
4602     return *this;
4603   }
4604 
4605   PotentialValuesState operator&=(const PotentialValuesState &PVS) {
4606     IsValidState &= PVS.IsValidState;
4607     unionAssumed(PVS);
4608     return *this;
4609   }
4610 
4611   bool contains(const MemberTy &V) const {
4612     return !isValidState() ? true : Set.contains(V);
4613   }
4614 
4615 protected:
4616   SetTy &getAssumedSet() {
4617     assert(isValidState() && "This set shoud not be used when it is invalid!");
4618     return Set;
4619   }
4620 
4621 private:
4622   /// Check the size of this set, and invalidate when the size is no
4623   /// less than \p MaxPotentialValues threshold.
4624   void checkAndInvalidate() {
4625     if (Set.size() >= MaxPotentialValues)
4626       indicatePessimisticFixpoint();
4627     else
4628       reduceUndefValue();
4629   }
4630 
4631   /// If this state contains both undef and not undef, we can reduce
4632   /// undef to the not undef value.
4633   void reduceUndefValue() { UndefIsContained = UndefIsContained & Set.empty(); }
4634 
4635   /// Insert an element into this set.
4636   void insert(const MemberTy &C) {
4637     if (!isValidState())
4638       return;
4639     Set.insert(C);
4640     checkAndInvalidate();
4641   }
4642 
4643   /// Take union with R.
4644   void unionWith(const PotentialValuesState &R) {
4645     /// If this is a full set, do nothing.
4646     if (!isValidState())
4647       return;
4648     /// If R is full set, change L to a full set.
4649     if (!R.isValidState()) {
4650       indicatePessimisticFixpoint();
4651       return;
4652     }
4653     for (const MemberTy &C : R.Set)
4654       Set.insert(C);
4655     UndefIsContained |= R.undefIsContained();
4656     checkAndInvalidate();
4657   }
4658 
4659   /// Take union with an undef value.
4660   void unionWithUndef() {
4661     UndefIsContained = true;
4662     reduceUndefValue();
4663   }
4664 
4665   /// Take intersection with R.
4666   void intersectWith(const PotentialValuesState &R) {
4667     /// If R is a full set, do nothing.
4668     if (!R.isValidState())
4669       return;
4670     /// If this is a full set, change this to R.
4671     if (!isValidState()) {
4672       *this = R;
4673       return;
4674     }
4675     SetTy IntersectSet;
4676     for (const MemberTy &C : Set) {
4677       if (R.Set.count(C))
4678         IntersectSet.insert(C);
4679     }
4680     Set = IntersectSet;
4681     UndefIsContained &= R.undefIsContained();
4682     reduceUndefValue();
4683   }
4684 
4685   /// A helper state which indicate whether this state is valid or not.
4686   BooleanState IsValidState;
4687 
4688   /// Container for potential values
4689   SetTy Set;
4690 
4691   /// Flag for undef value
4692   bool UndefIsContained;
4693 };
4694 
4695 using PotentialConstantIntValuesState = PotentialValuesState<APInt>;
4696 using PotentialLLVMValuesState =
4697     PotentialValuesState<std::pair<AA::ValueAndContext, AA::ValueScope>>;
4698 
4699 raw_ostream &operator<<(raw_ostream &OS,
4700                         const PotentialConstantIntValuesState &R);
4701 raw_ostream &operator<<(raw_ostream &OS, const PotentialLLVMValuesState &R);
4702 
4703 /// An abstract interface for potential values analysis.
4704 ///
4705 /// This AA collects potential values for each IR position.
4706 /// An assumed set of potential values is initialized with the empty set (the
4707 /// best state) and it will grow monotonically as we find more potential values
4708 /// for this position.
4709 /// The set might be forced to the worst state, that is, to contain every
4710 /// possible value for this position in 2 cases.
4711 ///   1. We surpassed the \p MaxPotentialValues threshold. This includes the
4712 ///      case that this position is affected (e.g. because of an operation) by a
4713 ///      Value that is in the worst state.
4714 ///   2. We tried to initialize on a Value that we cannot handle (e.g. an
4715 ///      operator we do not currently handle).
4716 ///
4717 /// For non constant integers see AAPotentialValues.
4718 struct AAPotentialConstantValues
4719     : public StateWrapper<PotentialConstantIntValuesState, AbstractAttribute> {
4720   using Base = StateWrapper<PotentialConstantIntValuesState, AbstractAttribute>;
4721   AAPotentialConstantValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4722 
4723   /// See AbstractAttribute::getState(...).
4724   PotentialConstantIntValuesState &getState() override { return *this; }
4725   const PotentialConstantIntValuesState &getState() const override {
4726     return *this;
4727   }
4728 
4729   /// Create an abstract attribute view for the position \p IRP.
4730   static AAPotentialConstantValues &createForPosition(const IRPosition &IRP,
4731                                                       Attributor &A);
4732 
4733   /// Return assumed constant for the associated value
4734   std::optional<Constant *>
4735   getAssumedConstant(Attributor &A, const Instruction *CtxI = nullptr) const {
4736     if (!isValidState())
4737       return nullptr;
4738     if (getAssumedSet().size() == 1) {
4739       Type *Ty = getAssociatedValue().getType();
4740       return cast_or_null<Constant>(AA::getWithType(
4741           *ConstantInt::get(Ty->getContext(), *(getAssumedSet().begin())),
4742           *Ty));
4743     }
4744     if (getAssumedSet().size() == 0) {
4745       if (undefIsContained())
4746         return UndefValue::get(getAssociatedValue().getType());
4747       return std::nullopt;
4748     }
4749 
4750     return nullptr;
4751   }
4752 
4753   /// See AbstractAttribute::getName()
4754   const std::string getName() const override {
4755     return "AAPotentialConstantValues";
4756   }
4757 
4758   /// See AbstractAttribute::getIdAddr()
4759   const char *getIdAddr() const override { return &ID; }
4760 
4761   /// This function should return true if the type of the \p AA is
4762   /// AAPotentialConstantValues
4763   static bool classof(const AbstractAttribute *AA) {
4764     return (AA->getIdAddr() == &ID);
4765   }
4766 
4767   /// Unique ID (due to the unique address)
4768   static const char ID;
4769 };
4770 
4771 struct AAPotentialValues
4772     : public StateWrapper<PotentialLLVMValuesState, AbstractAttribute> {
4773   using Base = StateWrapper<PotentialLLVMValuesState, AbstractAttribute>;
4774   AAPotentialValues(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
4775 
4776   /// See AbstractAttribute::getState(...).
4777   PotentialLLVMValuesState &getState() override { return *this; }
4778   const PotentialLLVMValuesState &getState() const override { return *this; }
4779 
4780   /// Create an abstract attribute view for the position \p IRP.
4781   static AAPotentialValues &createForPosition(const IRPosition &IRP,
4782                                               Attributor &A);
4783 
4784   /// Extract the single value in \p Values if any.
4785   static Value *getSingleValue(Attributor &A, const AbstractAttribute &AA,
4786                                const IRPosition &IRP,
4787                                SmallVectorImpl<AA::ValueAndContext> &Values);
4788 
4789   /// See AbstractAttribute::getName()
4790   const std::string getName() const override { return "AAPotentialValues"; }
4791 
4792   /// See AbstractAttribute::getIdAddr()
4793   const char *getIdAddr() const override { return &ID; }
4794 
4795   /// This function should return true if the type of the \p AA is
4796   /// AAPotentialValues
4797   static bool classof(const AbstractAttribute *AA) {
4798     return (AA->getIdAddr() == &ID);
4799   }
4800 
4801   /// Unique ID (due to the unique address)
4802   static const char ID;
4803 
4804 private:
4805   virtual bool
4806   getAssumedSimplifiedValues(Attributor &A,
4807                              SmallVectorImpl<AA::ValueAndContext> &Values,
4808                              AA::ValueScope) const = 0;
4809 
4810   friend struct Attributor;
4811 };
4812 
4813 /// An abstract interface for all noundef attributes.
4814 struct AANoUndef
4815     : public IRAttribute<Attribute::NoUndef,
4816                          StateWrapper<BooleanState, AbstractAttribute>> {
4817   AANoUndef(const IRPosition &IRP, Attributor &A) : IRAttribute(IRP) {}
4818 
4819   /// Return true if we assume that the underlying value is noundef.
4820   bool isAssumedNoUndef() const { return getAssumed(); }
4821 
4822   /// Return true if we know that underlying value is noundef.
4823   bool isKnownNoUndef() const { return getKnown(); }
4824 
4825   /// Create an abstract attribute view for the position \p IRP.
4826   static AANoUndef &createForPosition(const IRPosition &IRP, Attributor &A);
4827 
4828   /// See AbstractAttribute::getName()
4829   const std::string getName() const override { return "AANoUndef"; }
4830 
4831   /// See AbstractAttribute::getIdAddr()
4832   const char *getIdAddr() const override { return &ID; }
4833 
4834   /// This function should return true if the type of the \p AA is AANoUndef
4835   static bool classof(const AbstractAttribute *AA) {
4836     return (AA->getIdAddr() == &ID);
4837   }
4838 
4839   /// Unique ID (due to the unique address)
4840   static const char ID;
4841 };
4842 
4843 struct AACallGraphNode;
4844 struct AACallEdges;
4845 
4846 /// An Iterator for call edges, creates AACallEdges attributes in a lazy way.
4847 /// This iterator becomes invalid if the underlying edge list changes.
4848 /// So This shouldn't outlive a iteration of Attributor.
4849 class AACallEdgeIterator
4850     : public iterator_adaptor_base<AACallEdgeIterator,
4851                                    SetVector<Function *>::iterator> {
4852   AACallEdgeIterator(Attributor &A, SetVector<Function *>::iterator Begin)
4853       : iterator_adaptor_base(Begin), A(A) {}
4854 
4855 public:
4856   AACallGraphNode *operator*() const;
4857 
4858 private:
4859   Attributor &A;
4860   friend AACallEdges;
4861   friend AttributorCallGraph;
4862 };
4863 
4864 struct AACallGraphNode {
4865   AACallGraphNode(Attributor &A) : A(A) {}
4866   virtual ~AACallGraphNode() = default;
4867 
4868   virtual AACallEdgeIterator optimisticEdgesBegin() const = 0;
4869   virtual AACallEdgeIterator optimisticEdgesEnd() const = 0;
4870 
4871   /// Iterator range for exploring the call graph.
4872   iterator_range<AACallEdgeIterator> optimisticEdgesRange() const {
4873     return iterator_range<AACallEdgeIterator>(optimisticEdgesBegin(),
4874                                               optimisticEdgesEnd());
4875   }
4876 
4877 protected:
4878   /// Reference to Attributor needed for GraphTraits implementation.
4879   Attributor &A;
4880 };
4881 
4882 /// An abstract state for querying live call edges.
4883 /// This interface uses the Attributor's optimistic liveness
4884 /// information to compute the edges that are alive.
4885 struct AACallEdges : public StateWrapper<BooleanState, AbstractAttribute>,
4886                      AACallGraphNode {
4887   using Base = StateWrapper<BooleanState, AbstractAttribute>;
4888 
4889   AACallEdges(const IRPosition &IRP, Attributor &A)
4890       : Base(IRP), AACallGraphNode(A) {}
4891 
4892   /// Get the optimistic edges.
4893   virtual const SetVector<Function *> &getOptimisticEdges() const = 0;
4894 
4895   /// Is there any call with a unknown callee.
4896   virtual bool hasUnknownCallee() const = 0;
4897 
4898   /// Is there any call with a unknown callee, excluding any inline asm.
4899   virtual bool hasNonAsmUnknownCallee() const = 0;
4900 
4901   /// Iterator for exploring the call graph.
4902   AACallEdgeIterator optimisticEdgesBegin() const override {
4903     return AACallEdgeIterator(A, getOptimisticEdges().begin());
4904   }
4905 
4906   /// Iterator for exploring the call graph.
4907   AACallEdgeIterator optimisticEdgesEnd() const override {
4908     return AACallEdgeIterator(A, getOptimisticEdges().end());
4909   }
4910 
4911   /// Create an abstract attribute view for the position \p IRP.
4912   static AACallEdges &createForPosition(const IRPosition &IRP, Attributor &A);
4913 
4914   /// See AbstractAttribute::getName()
4915   const std::string getName() const override { return "AACallEdges"; }
4916 
4917   /// See AbstractAttribute::getIdAddr()
4918   const char *getIdAddr() const override { return &ID; }
4919 
4920   /// This function should return true if the type of the \p AA is AACallEdges.
4921   static bool classof(const AbstractAttribute *AA) {
4922     return (AA->getIdAddr() == &ID);
4923   }
4924 
4925   /// Unique ID (due to the unique address)
4926   static const char ID;
4927 };
4928 
4929 // Synthetic root node for the Attributor's internal call graph.
4930 struct AttributorCallGraph : public AACallGraphNode {
4931   AttributorCallGraph(Attributor &A) : AACallGraphNode(A) {}
4932   virtual ~AttributorCallGraph() = default;
4933 
4934   AACallEdgeIterator optimisticEdgesBegin() const override {
4935     return AACallEdgeIterator(A, A.Functions.begin());
4936   }
4937 
4938   AACallEdgeIterator optimisticEdgesEnd() const override {
4939     return AACallEdgeIterator(A, A.Functions.end());
4940   }
4941 
4942   /// Force populate the entire call graph.
4943   void populateAll() const {
4944     for (const AACallGraphNode *AA : optimisticEdgesRange()) {
4945       // Nothing else to do here.
4946       (void)AA;
4947     }
4948   }
4949 
4950   void print();
4951 };
4952 
4953 template <> struct GraphTraits<AACallGraphNode *> {
4954   using NodeRef = AACallGraphNode *;
4955   using ChildIteratorType = AACallEdgeIterator;
4956 
4957   static AACallEdgeIterator child_begin(AACallGraphNode *Node) {
4958     return Node->optimisticEdgesBegin();
4959   }
4960 
4961   static AACallEdgeIterator child_end(AACallGraphNode *Node) {
4962     return Node->optimisticEdgesEnd();
4963   }
4964 };
4965 
4966 template <>
4967 struct GraphTraits<AttributorCallGraph *>
4968     : public GraphTraits<AACallGraphNode *> {
4969   using nodes_iterator = AACallEdgeIterator;
4970 
4971   static AACallGraphNode *getEntryNode(AttributorCallGraph *G) {
4972     return static_cast<AACallGraphNode *>(G);
4973   }
4974 
4975   static AACallEdgeIterator nodes_begin(const AttributorCallGraph *G) {
4976     return G->optimisticEdgesBegin();
4977   }
4978 
4979   static AACallEdgeIterator nodes_end(const AttributorCallGraph *G) {
4980     return G->optimisticEdgesEnd();
4981   }
4982 };
4983 
4984 template <>
4985 struct DOTGraphTraits<AttributorCallGraph *> : public DefaultDOTGraphTraits {
4986   DOTGraphTraits(bool Simple = false) : DefaultDOTGraphTraits(Simple) {}
4987 
4988   std::string getNodeLabel(const AACallGraphNode *Node,
4989                            const AttributorCallGraph *Graph) {
4990     const AACallEdges *AACE = static_cast<const AACallEdges *>(Node);
4991     return AACE->getAssociatedFunction()->getName().str();
4992   }
4993 
4994   static bool isNodeHidden(const AACallGraphNode *Node,
4995                            const AttributorCallGraph *Graph) {
4996     // Hide the synth root.
4997     return static_cast<const AACallGraphNode *>(Graph) == Node;
4998   }
4999 };
5000 
5001 struct AAExecutionDomain
5002     : public StateWrapper<BooleanState, AbstractAttribute> {
5003   using Base = StateWrapper<BooleanState, AbstractAttribute>;
5004   AAExecutionDomain(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
5005 
5006   /// Summary about the execution domain of a block or instruction.
5007   struct ExecutionDomainTy {
5008     using BarriersSetTy = SmallPtrSet<CallBase *, 2>;
5009     using AssumesSetTy = SmallPtrSet<AssumeInst *, 4>;
5010 
5011     void addAssumeInst(Attributor &A, AssumeInst &AI) {
5012       EncounteredAssumes.insert(&AI);
5013     }
5014 
5015     void addAlignedBarrier(Attributor &A, CallBase &CB) {
5016       AlignedBarriers.insert(&CB);
5017     }
5018 
5019     void clearAssumeInstAndAlignedBarriers() {
5020       EncounteredAssumes.clear();
5021       AlignedBarriers.clear();
5022     }
5023 
5024     bool IsExecutedByInitialThreadOnly = true;
5025     bool IsReachedFromAlignedBarrierOnly = true;
5026     bool IsReachingAlignedBarrierOnly = true;
5027     bool EncounteredNonLocalSideEffect = false;
5028     BarriersSetTy AlignedBarriers;
5029     AssumesSetTy EncounteredAssumes;
5030   };
5031 
5032   /// Create an abstract attribute view for the position \p IRP.
5033   static AAExecutionDomain &createForPosition(const IRPosition &IRP,
5034                                               Attributor &A);
5035 
5036   /// See AbstractAttribute::getName().
5037   const std::string getName() const override { return "AAExecutionDomain"; }
5038 
5039   /// See AbstractAttribute::getIdAddr().
5040   const char *getIdAddr() const override { return &ID; }
5041 
5042   /// Check if an instruction is executed only by the initial thread.
5043   bool isExecutedByInitialThreadOnly(const Instruction &I) const {
5044     return isExecutedByInitialThreadOnly(*I.getParent());
5045   }
5046 
5047   /// Check if a basic block is executed only by the initial thread.
5048   virtual bool isExecutedByInitialThreadOnly(const BasicBlock &) const = 0;
5049 
5050   /// Check if the instruction \p I is executed in an aligned region, that is,
5051   /// the synchronizing effects before and after \p I are both aligned barriers.
5052   /// This effectively means all threads execute \p I together.
5053   virtual bool isExecutedInAlignedRegion(Attributor &A,
5054                                          const Instruction &I) const = 0;
5055 
5056   virtual ExecutionDomainTy getExecutionDomain(const BasicBlock &) const = 0;
5057   virtual ExecutionDomainTy getExecutionDomain(const CallBase &) const = 0;
5058   virtual ExecutionDomainTy getFunctionExecutionDomain() const = 0;
5059 
5060   /// This function should return true if the type of the \p AA is
5061   /// AAExecutionDomain.
5062   static bool classof(const AbstractAttribute *AA) {
5063     return (AA->getIdAddr() == &ID);
5064   }
5065 
5066   /// Unique ID (due to the unique address)
5067   static const char ID;
5068 };
5069 
5070 /// An abstract Attribute for computing reachability between functions.
5071 struct AAInterFnReachability
5072     : public StateWrapper<BooleanState, AbstractAttribute> {
5073   using Base = StateWrapper<BooleanState, AbstractAttribute>;
5074 
5075   AAInterFnReachability(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
5076 
5077   /// If the function represented by this possition can reach \p Fn.
5078   bool canReach(Attributor &A, const Function &Fn) const {
5079     Function *Scope = getAnchorScope();
5080     if (!Scope || Scope->isDeclaration())
5081       return true;
5082     return instructionCanReach(A, Scope->getEntryBlock().front(), Fn);
5083   }
5084 
5085   /// Can  \p Inst reach \p Fn.
5086   /// See also AA::isPotentiallyReachable.
5087   virtual bool instructionCanReach(
5088       Attributor &A, const Instruction &Inst, const Function &Fn,
5089       const AA::InstExclusionSetTy *ExclusionSet = nullptr,
5090       SmallPtrSet<const Function *, 16> *Visited = nullptr) const = 0;
5091 
5092   /// Create an abstract attribute view for the position \p IRP.
5093   static AAInterFnReachability &createForPosition(const IRPosition &IRP,
5094                                                   Attributor &A);
5095 
5096   /// See AbstractAttribute::getName()
5097   const std::string getName() const override { return "AAInterFnReachability"; }
5098 
5099   /// See AbstractAttribute::getIdAddr()
5100   const char *getIdAddr() const override { return &ID; }
5101 
5102   /// This function should return true if the type of the \p AA is AACallEdges.
5103   static bool classof(const AbstractAttribute *AA) {
5104     return (AA->getIdAddr() == &ID);
5105   }
5106 
5107   /// Unique ID (due to the unique address)
5108   static const char ID;
5109 };
5110 
5111 /// An abstract interface for struct information.
5112 struct AAPointerInfo : public AbstractAttribute {
5113   AAPointerInfo(const IRPosition &IRP) : AbstractAttribute(IRP) {}
5114 
5115   enum AccessKind {
5116     // First two bits to distinguish may and must accesses.
5117     AK_MUST = 1 << 0,
5118     AK_MAY = 1 << 1,
5119 
5120     // Then two bits for read and write. These are not exclusive.
5121     AK_R = 1 << 2,
5122     AK_W = 1 << 3,
5123     AK_RW = AK_R | AK_W,
5124 
5125     // One special case for assumptions about memory content. These
5126     // are neither reads nor writes. They are however always modeled
5127     // as read to avoid using them for write removal.
5128     AK_ASSUMPTION = (1 << 4) | AK_MUST,
5129 
5130     // Helper for easy access.
5131     AK_MAY_READ = AK_MAY | AK_R,
5132     AK_MAY_WRITE = AK_MAY | AK_W,
5133     AK_MAY_READ_WRITE = AK_MAY | AK_R | AK_W,
5134     AK_MUST_READ = AK_MUST | AK_R,
5135     AK_MUST_WRITE = AK_MUST | AK_W,
5136     AK_MUST_READ_WRITE = AK_MUST | AK_R | AK_W,
5137   };
5138 
5139   /// A container for a list of ranges.
5140   struct RangeList {
5141     // The set of ranges rarely contains more than one element, and is unlikely
5142     // to contain more than say four elements. So we find the middle-ground with
5143     // a sorted vector. This avoids hard-coding a rarely used number like "four"
5144     // into every instance of a SmallSet.
5145     using RangeTy = AA::RangeTy;
5146     using VecTy = SmallVector<RangeTy>;
5147     using iterator = VecTy::iterator;
5148     using const_iterator = VecTy::const_iterator;
5149     VecTy Ranges;
5150 
5151     RangeList(const RangeTy &R) { Ranges.push_back(R); }
5152     RangeList(ArrayRef<int64_t> Offsets, int64_t Size) {
5153       Ranges.reserve(Offsets.size());
5154       for (unsigned i = 0, e = Offsets.size(); i != e; ++i) {
5155         assert(((i + 1 == e) || Offsets[i] < Offsets[i + 1]) &&
5156                "Expected strictly ascending offsets.");
5157         Ranges.emplace_back(Offsets[i], Size);
5158       }
5159     }
5160     RangeList() = default;
5161 
5162     iterator begin() { return Ranges.begin(); }
5163     iterator end() { return Ranges.end(); }
5164     const_iterator begin() const { return Ranges.begin(); }
5165     const_iterator end() const { return Ranges.end(); }
5166 
5167     // Helpers required for std::set_difference
5168     using value_type = RangeTy;
5169     void push_back(const RangeTy &R) {
5170       assert((Ranges.empty() || RangeTy::OffsetLessThan(Ranges.back(), R)) &&
5171              "Ensure the last element is the greatest.");
5172       Ranges.push_back(R);
5173     }
5174 
5175     /// Copy ranges from \p L that are not in \p R, into \p D.
5176     static void set_difference(const RangeList &L, const RangeList &R,
5177                                RangeList &D) {
5178       std::set_difference(L.begin(), L.end(), R.begin(), R.end(),
5179                           std::back_inserter(D), RangeTy::OffsetLessThan);
5180     }
5181 
5182     unsigned size() const { return Ranges.size(); }
5183 
5184     bool operator==(const RangeList &OI) const { return Ranges == OI.Ranges; }
5185 
5186     /// Merge the ranges in \p RHS into the current ranges.
5187     /// - Merging a list of  unknown ranges makes the current list unknown.
5188     /// - Ranges with the same offset are merged according to RangeTy::operator&
5189     /// \return true if the current RangeList changed.
5190     bool merge(const RangeList &RHS) {
5191       if (isUnknown())
5192         return false;
5193       if (RHS.isUnknown()) {
5194         setUnknown();
5195         return true;
5196       }
5197 
5198       if (Ranges.empty()) {
5199         Ranges = RHS.Ranges;
5200         return true;
5201       }
5202 
5203       bool Changed = false;
5204       auto LPos = Ranges.begin();
5205       for (auto &R : RHS.Ranges) {
5206         auto Result = insert(LPos, R);
5207         if (isUnknown())
5208           return true;
5209         LPos = Result.first;
5210         Changed |= Result.second;
5211       }
5212       return Changed;
5213     }
5214 
5215     /// Insert \p R at the given iterator \p Pos, and merge if necessary.
5216     ///
5217     /// This assumes that all ranges before \p Pos are OffsetLessThan \p R, and
5218     /// then maintains the sorted order for the suffix list.
5219     ///
5220     /// \return The place of insertion and true iff anything changed.
5221     std::pair<iterator, bool> insert(iterator Pos, const RangeTy &R) {
5222       if (isUnknown())
5223         return std::make_pair(Ranges.begin(), false);
5224       if (R.offsetOrSizeAreUnknown()) {
5225         return std::make_pair(setUnknown(), true);
5226       }
5227 
5228       // Maintain this as a sorted vector of unique entries.
5229       auto LB = std::lower_bound(Pos, Ranges.end(), R, RangeTy::OffsetLessThan);
5230       if (LB == Ranges.end() || LB->Offset != R.Offset)
5231         return std::make_pair(Ranges.insert(LB, R), true);
5232       bool Changed = *LB != R;
5233       *LB &= R;
5234       if (LB->offsetOrSizeAreUnknown())
5235         return std::make_pair(setUnknown(), true);
5236       return std::make_pair(LB, Changed);
5237     }
5238 
5239     /// Insert the given range \p R, maintaining sorted order.
5240     ///
5241     /// \return The place of insertion and true iff anything changed.
5242     std::pair<iterator, bool> insert(const RangeTy &R) {
5243       return insert(Ranges.begin(), R);
5244     }
5245 
5246     /// Add the increment \p Inc to the offset of every range.
5247     void addToAllOffsets(int64_t Inc) {
5248       assert(!isUnassigned() &&
5249              "Cannot increment if the offset is not yet computed!");
5250       if (isUnknown())
5251         return;
5252       for (auto &R : Ranges) {
5253         R.Offset += Inc;
5254       }
5255     }
5256 
5257     /// Return true iff there is exactly one range and it is known.
5258     bool isUnique() const {
5259       return Ranges.size() == 1 && !Ranges.front().offsetOrSizeAreUnknown();
5260     }
5261 
5262     /// Return the unique range, assuming it exists.
5263     const RangeTy &getUnique() const {
5264       assert(isUnique() && "No unique range to return!");
5265       return Ranges.front();
5266     }
5267 
5268     /// Return true iff the list contains an unknown range.
5269     bool isUnknown() const {
5270       if (isUnassigned())
5271         return false;
5272       if (Ranges.front().offsetOrSizeAreUnknown()) {
5273         assert(Ranges.size() == 1 && "Unknown is a singleton range.");
5274         return true;
5275       }
5276       return false;
5277     }
5278 
5279     /// Discard all ranges and insert a single unknown range.
5280     iterator setUnknown() {
5281       Ranges.clear();
5282       Ranges.push_back(RangeTy::getUnknown());
5283       return Ranges.begin();
5284     }
5285 
5286     /// Return true if no ranges have been inserted.
5287     bool isUnassigned() const { return Ranges.size() == 0; }
5288   };
5289 
5290   /// An access description.
5291   struct Access {
5292     Access(Instruction *I, int64_t Offset, int64_t Size,
5293            std::optional<Value *> Content, AccessKind Kind, Type *Ty)
5294         : LocalI(I), RemoteI(I), Content(Content), Ranges(Offset, Size),
5295           Kind(Kind), Ty(Ty) {
5296       verify();
5297     }
5298     Access(Instruction *LocalI, Instruction *RemoteI, const RangeList &Ranges,
5299            std::optional<Value *> Content, AccessKind K, Type *Ty)
5300         : LocalI(LocalI), RemoteI(RemoteI), Content(Content), Ranges(Ranges),
5301           Kind(K), Ty(Ty) {
5302       if (Ranges.size() > 1) {
5303         Kind = AccessKind(Kind | AK_MAY);
5304         Kind = AccessKind(Kind & ~AK_MUST);
5305       }
5306       verify();
5307     }
5308     Access(Instruction *LocalI, Instruction *RemoteI, int64_t Offset,
5309            int64_t Size, std::optional<Value *> Content, AccessKind Kind,
5310            Type *Ty)
5311         : LocalI(LocalI), RemoteI(RemoteI), Content(Content),
5312           Ranges(Offset, Size), Kind(Kind), Ty(Ty) {
5313       verify();
5314     }
5315     Access(const Access &Other) = default;
5316 
5317     Access &operator=(const Access &Other) = default;
5318     bool operator==(const Access &R) const {
5319       return LocalI == R.LocalI && RemoteI == R.RemoteI && Ranges == R.Ranges &&
5320              Content == R.Content && Kind == R.Kind;
5321     }
5322     bool operator!=(const Access &R) const { return !(*this == R); }
5323 
5324     Access &operator&=(const Access &R) {
5325       assert(RemoteI == R.RemoteI && "Expected same instruction!");
5326       assert(LocalI == R.LocalI && "Expected same instruction!");
5327 
5328       // Note that every Access object corresponds to a unique Value, and only
5329       // accesses to the same Value are merged. Hence we assume that all ranges
5330       // are the same size. If ranges can be different size, then the contents
5331       // must be dropped.
5332       Ranges.merge(R.Ranges);
5333       Content =
5334           AA::combineOptionalValuesInAAValueLatice(Content, R.Content, Ty);
5335 
5336       // Combine the access kind, which results in a bitwise union.
5337       // If there is more than one range, then this must be a MAY.
5338       // If we combine a may and a must access we clear the must bit.
5339       Kind = AccessKind(Kind | R.Kind);
5340       if ((Kind & AK_MAY) || Ranges.size() > 1) {
5341         Kind = AccessKind(Kind | AK_MAY);
5342         Kind = AccessKind(Kind & ~AK_MUST);
5343       }
5344       verify();
5345       return *this;
5346     }
5347 
5348     void verify() {
5349       assert(isMustAccess() + isMayAccess() == 1 &&
5350              "Expect must or may access, not both.");
5351       assert(isAssumption() + isWrite() <= 1 &&
5352              "Expect assumption access or write access, never both.");
5353       assert((isMayAccess() || Ranges.size() == 1) &&
5354              "Cannot be a must access if there are multiple ranges.");
5355     }
5356 
5357     /// Return the access kind.
5358     AccessKind getKind() const { return Kind; }
5359 
5360     /// Return true if this is a read access.
5361     bool isRead() const { return Kind & AK_R; }
5362 
5363     /// Return true if this is a write access.
5364     bool isWrite() const { return Kind & AK_W; }
5365 
5366     /// Return true if this is a write access.
5367     bool isWriteOrAssumption() const { return isWrite() || isAssumption(); }
5368 
5369     /// Return true if this is an assumption access.
5370     bool isAssumption() const { return Kind == AK_ASSUMPTION; }
5371 
5372     bool isMustAccess() const {
5373       bool MustAccess = Kind & AK_MUST;
5374       assert((!MustAccess || Ranges.size() < 2) &&
5375              "Cannot be a must access if there are multiple ranges.");
5376       return MustAccess;
5377     }
5378 
5379     bool isMayAccess() const {
5380       bool MayAccess = Kind & AK_MAY;
5381       assert((MayAccess || Ranges.size() < 2) &&
5382              "Cannot be a must access if there are multiple ranges.");
5383       return MayAccess;
5384     }
5385 
5386     /// Return the instruction that causes the access with respect to the local
5387     /// scope of the associated attribute.
5388     Instruction *getLocalInst() const { return LocalI; }
5389 
5390     /// Return the actual instruction that causes the access.
5391     Instruction *getRemoteInst() const { return RemoteI; }
5392 
5393     /// Return true if the value written is not known yet.
5394     bool isWrittenValueYetUndetermined() const { return !Content; }
5395 
5396     /// Return true if the value written cannot be determined at all.
5397     bool isWrittenValueUnknown() const {
5398       return Content.has_value() && !*Content;
5399     }
5400 
5401     /// Set the value written to nullptr, i.e., unknown.
5402     void setWrittenValueUnknown() { Content = nullptr; }
5403 
5404     /// Return the type associated with the access, if known.
5405     Type *getType() const { return Ty; }
5406 
5407     /// Return the value writen, if any.
5408     Value *getWrittenValue() const {
5409       assert(!isWrittenValueYetUndetermined() &&
5410              "Value needs to be determined before accessing it.");
5411       return *Content;
5412     }
5413 
5414     /// Return the written value which can be `llvm::null` if it is not yet
5415     /// determined.
5416     std::optional<Value *> getContent() const { return Content; }
5417 
5418     bool hasUniqueRange() const { return Ranges.isUnique(); }
5419     const AA::RangeTy &getUniqueRange() const { return Ranges.getUnique(); }
5420 
5421     /// Add a range accessed by this Access.
5422     ///
5423     /// If there are multiple ranges, then this is a "may access".
5424     void addRange(int64_t Offset, int64_t Size) {
5425       Ranges.insert({Offset, Size});
5426       if (!hasUniqueRange()) {
5427         Kind = AccessKind(Kind | AK_MAY);
5428         Kind = AccessKind(Kind & ~AK_MUST);
5429       }
5430     }
5431 
5432     const RangeList &getRanges() const { return Ranges; }
5433 
5434     using const_iterator = RangeList::const_iterator;
5435     const_iterator begin() const { return Ranges.begin(); }
5436     const_iterator end() const { return Ranges.end(); }
5437 
5438   private:
5439     /// The instruction responsible for the access with respect to the local
5440     /// scope of the associated attribute.
5441     Instruction *LocalI;
5442 
5443     /// The instruction responsible for the access.
5444     Instruction *RemoteI;
5445 
5446     /// The value written, if any. `llvm::none` means "not known yet", `nullptr`
5447     /// cannot be determined.
5448     std::optional<Value *> Content;
5449 
5450     /// Set of potential ranges accessed from the base pointer.
5451     RangeList Ranges;
5452 
5453     /// The access kind, e.g., READ, as bitset (could be more than one).
5454     AccessKind Kind;
5455 
5456     /// The type of the content, thus the type read/written, can be null if not
5457     /// available.
5458     Type *Ty;
5459   };
5460 
5461   /// Create an abstract attribute view for the position \p IRP.
5462   static AAPointerInfo &createForPosition(const IRPosition &IRP, Attributor &A);
5463 
5464   /// See AbstractAttribute::getName()
5465   const std::string getName() const override { return "AAPointerInfo"; }
5466 
5467   /// See AbstractAttribute::getIdAddr()
5468   const char *getIdAddr() const override { return &ID; }
5469 
5470   /// Call \p CB on all accesses that might interfere with \p Range and return
5471   /// true if all such accesses were known and the callback returned true for
5472   /// all of them, false otherwise. An access interferes with an offset-size
5473   /// pair if it might read or write that memory region.
5474   virtual bool forallInterferingAccesses(
5475       AA::RangeTy Range, function_ref<bool(const Access &, bool)> CB) const = 0;
5476 
5477   /// Call \p CB on all accesses that might interfere with \p I and
5478   /// return true if all such accesses were known and the callback returned true
5479   /// for all of them, false otherwise. In contrast to forallInterferingAccesses
5480   /// this function will perform reasoning to exclude write accesses that cannot
5481   /// affect the load even if they on the surface look as if they would. The
5482   /// flag \p HasBeenWrittenTo will be set to true if we know that \p I does not
5483   /// read the intial value of the underlying memory.
5484   virtual bool forallInterferingAccesses(
5485       Attributor &A, const AbstractAttribute &QueryingAA, Instruction &I,
5486       function_ref<bool(const Access &, bool)> CB, bool &HasBeenWrittenTo,
5487       AA::RangeTy &Range) const = 0;
5488 
5489   /// This function should return true if the type of the \p AA is AAPointerInfo
5490   static bool classof(const AbstractAttribute *AA) {
5491     return (AA->getIdAddr() == &ID);
5492   }
5493 
5494   /// Unique ID (due to the unique address)
5495   static const char ID;
5496 };
5497 
5498 /// An abstract attribute for getting assumption information.
5499 struct AAAssumptionInfo
5500     : public StateWrapper<SetState<StringRef>, AbstractAttribute,
5501                           DenseSet<StringRef>> {
5502   using Base =
5503       StateWrapper<SetState<StringRef>, AbstractAttribute, DenseSet<StringRef>>;
5504 
5505   AAAssumptionInfo(const IRPosition &IRP, Attributor &A,
5506                    const DenseSet<StringRef> &Known)
5507       : Base(IRP, Known) {}
5508 
5509   /// Returns true if the assumption set contains the assumption \p Assumption.
5510   virtual bool hasAssumption(const StringRef Assumption) const = 0;
5511 
5512   /// Create an abstract attribute view for the position \p IRP.
5513   static AAAssumptionInfo &createForPosition(const IRPosition &IRP,
5514                                              Attributor &A);
5515 
5516   /// See AbstractAttribute::getName()
5517   const std::string getName() const override { return "AAAssumptionInfo"; }
5518 
5519   /// See AbstractAttribute::getIdAddr()
5520   const char *getIdAddr() const override { return &ID; }
5521 
5522   /// This function should return true if the type of the \p AA is
5523   /// AAAssumptionInfo
5524   static bool classof(const AbstractAttribute *AA) {
5525     return (AA->getIdAddr() == &ID);
5526   }
5527 
5528   /// Unique ID (due to the unique address)
5529   static const char ID;
5530 };
5531 
5532 /// An abstract attribute for getting all assumption underlying objects.
5533 struct AAUnderlyingObjects : AbstractAttribute {
5534   AAUnderlyingObjects(const IRPosition &IRP) : AbstractAttribute(IRP) {}
5535 
5536   /// Create an abstract attribute biew for the position \p IRP.
5537   static AAUnderlyingObjects &createForPosition(const IRPosition &IRP,
5538                                                 Attributor &A);
5539 
5540   /// See AbstractAttribute::getName()
5541   const std::string getName() const override { return "AAUnderlyingObjects"; }
5542 
5543   /// See AbstractAttribute::getIdAddr()
5544   const char *getIdAddr() const override { return &ID; }
5545 
5546   /// This function should return true if the type of the \p AA is
5547   /// AAUnderlyingObjects.
5548   static bool classof(const AbstractAttribute *AA) {
5549     return (AA->getIdAddr() == &ID);
5550   }
5551 
5552   /// Unique ID (due to the unique address)
5553   static const char ID;
5554 
5555   /// Check \p Pred on all underlying objects in \p Scope collected so far.
5556   ///
5557   /// This method will evaluate \p Pred on all underlying objects in \p Scope
5558   /// collected so far and return true if \p Pred holds on all of them.
5559   virtual bool
5560   forallUnderlyingObjects(function_ref<bool(Value &)> Pred,
5561                           AA::ValueScope Scope = AA::Interprocedural) const = 0;
5562 };
5563 
5564 raw_ostream &operator<<(raw_ostream &, const AAPointerInfo::Access &);
5565 
5566 /// Run options, used by the pass manager.
5567 enum AttributorRunOption {
5568   NONE = 0,
5569   MODULE = 1 << 0,
5570   CGSCC = 1 << 1,
5571   ALL = MODULE | CGSCC
5572 };
5573 
5574 } // end namespace llvm
5575 
5576 #endif // LLVM_TRANSFORMS_IPO_ATTRIBUTOR_H
5577