
.. _gmir-opcodes:

Generic Opcodes
===============

.. contents::
   :local:

.. note::

  This documentation does not yet fully account for vectors. Many of the
  scalar/integer/floating-point operations can also take vectors.

Constants
---------

G_IMPLICIT_DEF
^^^^^^^^^^^^^^

An undefined value.

.. code-block:: none

  %0:_(s32) = G_IMPLICIT_DEF

G_CONSTANT
^^^^^^^^^^

An integer constant.

.. code-block:: none

  %0:_(s32) = G_CONSTANT i32 1

G_FCONSTANT
^^^^^^^^^^^

A floating point constant.

.. code-block:: none

  %0:_(s32) = G_FCONSTANT float 1.0

G_FRAME_INDEX
^^^^^^^^^^^^^

The address of an object in the stack frame.

.. code-block:: none

  %1:_(p0) = G_FRAME_INDEX %stack.0.ptr0

G_GLOBAL_VALUE
^^^^^^^^^^^^^^

The address of a global value.

.. code-block:: none

  %0(p0) = G_GLOBAL_VALUE @var_local

G_BLOCK_ADDR
^^^^^^^^^^^^

The address of a basic block.

.. code-block:: none

  %0:_(p0) = G_BLOCK_ADDR blockaddress(@test_blockaddress, %ir-block.block)

Integer Extension and Truncation
--------------------------------

G_ANYEXT
^^^^^^^^

Extend the underlying scalar type of an operation, leaving the high bits
unspecified.

.. code-block:: none

  %1:_(s32) = G_ANYEXT %0:_(s16)

G_SEXT
^^^^^^

Sign extend the underlying scalar type of an operation, copying the sign bit
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_SEXT %0:_(s16)

G_SEXT_INREG
^^^^^^^^^^^^

Sign extend the value from an arbitrary bit position, copying the sign bit
into all bits above it. This is equivalent to a shl + ashr pair with an
appropriate shift amount. $sz is an immediate (MachineOperand::isImm()
returns true) to allow targets to have some bitwidths legal and others
lowered. This opcode is particularly useful if the target has sign-extension
instructions that are cheaper than the constituent shifts as the optimizer is
able to make decisions on whether it's better to hang on to the G_SEXT_INREG
or to lower it and optimize the individual shifts.

.. code-block:: none

  %1:_(s32) = G_SEXT_INREG %0:_(s32), 16

G_ZEXT
^^^^^^

Zero extend the underlying scalar type of an operation, putting zero bits
into the newly-created space.

.. code-block:: none

  %1:_(s32) = G_ZEXT %0:_(s16)

G_TRUNC
^^^^^^^

Truncate the underlying scalar type of an operation. This is equivalent to
G_EXTRACT for scalar types, but acts elementwise on vectors.

.. code-block:: none

  %1:_(s16) = G_TRUNC %0:_(s32)

Type Conversions
----------------

G_INTTOPTR
^^^^^^^^^^

Convert an integer to a pointer.

.. code-block:: none

  %1:_(p0) = G_INTTOPTR %0:_(s32)

G_PTRTOINT
^^^^^^^^^^

Convert a pointer to an integer.

.. code-block:: none

  %1:_(s32) = G_PTRTOINT %0:_(p0)

G_BITCAST
^^^^^^^^^

Reinterpret a value as a new type. This is usually done without changing any
bits, but this is not always the case due to a subtlety in the definition of
the :ref:`LLVM-IR Bitcast Instruction <i_bitcast>`. It is allowed to bitcast
between pointers with the same size, but different address spaces.

.. code-block:: none

  %1:_(s64) = G_BITCAST %0:_(<2 x s32>)

G_ADDRSPACE_CAST
^^^^^^^^^^^^^^^^

Convert a pointer in one address space to a pointer in another address space.

.. code-block:: none

  %1:_(p1) = G_ADDRSPACE_CAST %0:_(p0)

.. caution::

  :ref:`i_addrspacecast` doesn't mention what happens if the cast is simply
  invalid (i.e. if the address spaces are disjoint).

Scalar Operations
-----------------

G_EXTRACT
^^^^^^^^^

Extract a register of the specified size, starting from the block given by
index. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.

.. code-block:: none

  %3:_(s32) = G_EXTRACT %2:_(s64), 32

G_INSERT
^^^^^^^^

Insert a smaller register into a larger one at the specified bit-index.

.. code-block:: none

  %2:_(s64) = G_INSERT %0:_(s64), %1:_(s32), 0

G_MERGE_VALUES
^^^^^^^^^^^^^^

Concatenate multiple registers of the same size into a wider register.
The input operands are always ordered from lowest bits to highest:

.. code-block:: none

  %0:_(s32) = G_MERGE_VALUES %bits_0_7:_(s8), %bits_8_15:_(s8),
                             %bits_16_23:_(s8), %bits_24_31:_(s8)

G_UNMERGE_VALUES
^^^^^^^^^^^^^^^^

Extract multiple registers of the specified size, starting from blocks given by
indexes. This will almost certainly be mapped to sub-register COPYs after
register banks have been selected.
The output operands are always ordered from lowest bits to highest:

.. code-block:: none

  %bits_0_7:_(s8), %bits_8_15:_(s8),
      %bits_16_23:_(s8), %bits_24_31:_(s8) = G_UNMERGE_VALUES %0:_(s32)

G_BSWAP
^^^^^^^

Reverse the order of the bytes in a scalar.

.. code-block:: none

  %1:_(s32) = G_BSWAP %0:_(s32)

G_BITREVERSE
^^^^^^^^^^^^

Reverse the order of the bits in a scalar.

.. code-block:: none

  %1:_(s32) = G_BITREVERSE %0:_(s32)

G_SBFX, G_UBFX
^^^^^^^^^^^^^^

Extract a range of bits from a register.

The source operands are registers as follows:

- Source
- The least-significant bit for the extraction
- The width of the extraction

The least-significant bit (lsb) and width operands are in the range:

::

      0 <= lsb < lsb + width <= source bitwidth, where all values are unsigned

G_SBFX sign-extends the result, while G_UBFX zero-extends the result.

.. code-block:: none

  ; Extract 5 bits starting at bit 1 from %x and store them in %a.
  ; Sign-extend the result.
  ;
  ; Example:
  ; %x = 0...0000[10110]1 ---> %a = 1...111111[10110]
  %lsb_one = G_CONSTANT i32 1
  %width_five = G_CONSTANT i32 5
  %a:_(s32) = G_SBFX %x, %lsb_one, %width_five

  ; Extract 3 bits starting at bit 2 from %x and store them in %b. Zero-extend
  ; the result.
  ;
  ; Example:
  ; %x = 1...11111[100]11 ---> %b = 0...00000[100]
  %lsb_two = G_CONSTANT i32 2
  %width_three = G_CONSTANT i32 3
  %b:_(s32) = G_UBFX %x, %lsb_two, %width_three

Integer Operations
-------------------

G_ADD, G_SUB, G_MUL, G_AND, G_OR, G_XOR, G_SDIV, G_UDIV, G_SREM, G_UREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These each perform their respective integer arithmetic on a scalar.

.. code-block:: none

  %2:_(s32) = G_ADD %0:_(s32), %1:_(s32)

G_SDIVREM, G_UDIVREM
^^^^^^^^^^^^^^^^^^^^

Perform integer division and remainder, thereby producing two results.

.. code-block:: none

  %div:_(s32), %rem:_(s32) = G_SDIVREM %0:_(s32), %1:_(s32)

G_SADDSAT, G_UADDSAT, G_SSUBSAT, G_USUBSAT, G_SSHLSAT, G_USHLSAT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Signed and unsigned addition, subtraction and left shift with saturation.

.. code-block:: none

  %2:_(s32) = G_SADDSAT %0:_(s32), %1:_(s32)

G_SHL, G_LSHR, G_ASHR
^^^^^^^^^^^^^^^^^^^^^

Shift the bits of a scalar left or right, inserting zeros (or the sign bit, for
G_ASHR).

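For example, shifting a 32-bit value left by a variable amount might look like:

.. code-block:: none

  %2:_(s32) = G_SHL %0:_(s32), %1:_(s32)
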
G_ROTR, G_ROTL
^^^^^^^^^^^^^^

Rotate the bits right (G_ROTR) or left (G_ROTL).

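For example:

.. code-block:: none

  %2:_(s32) = G_ROTR %0:_(s32), %1:_(s32)
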
G_ICMP
^^^^^^

Perform integer comparison producing non-zero (true) or zero (false). It's
target specific whether a true value is 1, ~0U, or some other non-zero value.

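For example, an equality comparison of two 32-bit values might be written as:

.. code-block:: none

  %2:_(s1) = G_ICMP intpred(eq), %0:_(s32), %1:_(s32)
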
G_SELECT
^^^^^^^^

Select between two values depending on a zero/non-zero value.

.. code-block:: none

  %5:_(s32) = G_SELECT %4(s1), %6, %2

G_PTR_ADD
^^^^^^^^^

Add a scalar offset in addressable units to a pointer. Addressable units are
typically bytes but this may vary between targets.

.. code-block:: none

  %2:_(p0) = G_PTR_ADD %0:_(p0), %1:_(s32)

.. caution::

  There are currently no in-tree targets that use this with addressable units
  not equal to 8 bits.

G_PTRMASK
^^^^^^^^^^

Zero out an arbitrary mask of bits of a pointer. The mask type must be
an integer, and the number of vector elements must match for all
operands. This corresponds to `i_intr_llvm_ptrmask`.

.. code-block:: none

  %2:_(p0) = G_PTRMASK %0, %1

G_SMIN, G_SMAX, G_UMIN, G_UMAX
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Take the minimum/maximum of two values.

.. code-block:: none

  %5:_(s32) = G_SMIN %6, %2

G_ABS
^^^^^

Take the absolute value of a signed integer. The absolute value of the minimum
negative value (e.g. the 8-bit value `0x80`) is defined to be itself.

.. code-block:: none

  %1:_(s32) = G_ABS %0

G_UADDO, G_SADDO, G_USUBO, G_SSUBO, G_SMULO, G_UMULO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and produce a carry output in addition to the
normal result.

.. code-block:: none

  %3:_(s32), %4:_(s1) = G_UADDO %0, %1

G_UADDE, G_SADDE, G_USUBE, G_SSUBE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the requested arithmetic and consume a carry input in addition to the
normal input. Also produce a carry output in addition to the normal result.

.. code-block:: none

  %4:_(s32), %5:_(s1) = G_UADDE %0, %1, %3:_(s1)

G_UMULH, G_SMULH
^^^^^^^^^^^^^^^^

Multiply two numbers at twice the incoming bit width (unsigned for G_UMULH,
signed for G_SMULH) and return the high half of the result.

.. code-block:: none

  %3:_(s32) = G_UMULH %0, %1

G_CTLZ, G_CTTZ, G_CTPOP
^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros, trailing zeros, or number of set bits.

.. code-block:: none

  %2:_(s33) = G_CTLZ %1
  %2:_(s33) = G_CTTZ %1
  %2:_(s33) = G_CTPOP %1

G_CTLZ_ZERO_UNDEF, G_CTTZ_ZERO_UNDEF
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Count leading zeros or trailing zeros. If the value is zero then the result is
undefined.

.. code-block:: none

  %2:_(s33) = G_CTLZ_ZERO_UNDEF %1
  %2:_(s33) = G_CTTZ_ZERO_UNDEF %1

Floating Point Operations
-------------------------

G_FCMP
^^^^^^

Perform floating point comparison producing non-zero (true) or zero
(false). It's target specific whether a true value is 1, ~0U, or some other
non-zero value.

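For example, an ordered greater-than-or-equal comparison might be written as:

.. code-block:: none

  %2:_(s1) = G_FCMP floatpred(oge), %0:_(s32), %1:_(s32)
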
G_FNEG
^^^^^^

Floating point negation.

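For example:

.. code-block:: none

  %1:_(s32) = G_FNEG %0:_(s32)
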
G_FPEXT
^^^^^^^

Convert a floating point value to a larger type.

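For example, extending from single to double precision:

.. code-block:: none

  %1:_(s64) = G_FPEXT %0:_(s32)
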
G_FPTRUNC
^^^^^^^^^

Convert a floating point value to a narrower type.

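For example, truncating from double to single precision:

.. code-block:: none

  %1:_(s32) = G_FPTRUNC %0:_(s64)
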
G_FPTOSI, G_FPTOUI, G_SITOFP, G_UITOFP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Convert between integer and floating point.

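For example, converting to a signed integer and back:

.. code-block:: none

  %1:_(s32) = G_FPTOSI %0:_(s32)
  %2:_(s32) = G_SITOFP %1:_(s32)
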
G_FABS
^^^^^^

Take the absolute value of a floating point value.

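For example:

.. code-block:: none

  %1:_(s32) = G_FABS %0:_(s32)
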
G_FCOPYSIGN
^^^^^^^^^^^

Copy the value of the first operand, replacing the sign bit with that of the
second operand.

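For example:

.. code-block:: none

  %2:_(s32) = G_FCOPYSIGN %0:_(s32), %1:_(s32)
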
G_FCANONICALIZE
^^^^^^^^^^^^^^^

See :ref:`i_intr_llvm_canonicalize`.

G_FMINNUM
^^^^^^^^^

Perform floating-point minimum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMINNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMAXNUM
^^^^^^^^^

Perform floating-point maximum on two values.

In the case where a single input is a NaN (either signaling or quiet),
the non-NaN input is returned.

The return value of (FMAXNUM 0.0, -0.0) could be either 0.0 or -0.0.

G_FMINNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point minimum on two values, following the IEEE-754 2008
definition. This differs from FMINNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMAXNUM_IEEE
^^^^^^^^^^^^^^

Perform floating-point maximum on two values, following the IEEE-754 2008
definition. This differs from FMAXNUM in the handling of signaling NaNs. If one
input is a signaling NaN, returns a quiet NaN.

G_FMINIMUM
^^^^^^^^^^

NaN-propagating minimum that also treats -0.0 as less than 0.0. While
FMINNUM_IEEE follows IEEE 754-2008 semantics, FMINIMUM follows IEEE 754-2018
draft semantics.

G_FMAXIMUM
^^^^^^^^^^

NaN-propagating maximum that also treats -0.0 as less than 0.0. While
FMAXNUM_IEEE follows IEEE 754-2008 semantics, FMAXIMUM follows IEEE 754-2018
draft semantics.

G_FADD, G_FSUB, G_FMUL, G_FDIV, G_FREM
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Perform the specified floating point arithmetic.

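For example:

.. code-block:: none

  %2:_(s32) = G_FADD %0:_(s32), %1:_(s32)
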
G_FMA
^^^^^

Perform a fused multiply add (i.e. without the intermediate rounding step).

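For example, computing ``%0 * %1 + %2`` with only a single rounding step:

.. code-block:: none

  %3:_(s32) = G_FMA %0:_(s32), %1:_(s32), %2:_(s32)
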
G_FMAD
^^^^^^

Perform a non-fused multiply add (i.e. with the intermediate rounding step).

G_FPOW
^^^^^^

Raise the first operand to the power of the second.

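For example:

.. code-block:: none

  %2:_(s32) = G_FPOW %0:_(s32), %1:_(s32)
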
G_FEXP, G_FEXP2
^^^^^^^^^^^^^^^

Calculate the base-e or base-2 exponential of a value.

G_FLOG, G_FLOG2, G_FLOG10
^^^^^^^^^^^^^^^^^^^^^^^^^

Calculate the base-e, base-2, or base-10 logarithm of a value, respectively.

G_FCEIL, G_FCOS, G_FSIN, G_FSQRT, G_FFLOOR, G_FRINT, G_FNEARBYINT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These correspond to the standard C functions of the same name.

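For example:

.. code-block:: none

  %1:_(s32) = G_FSQRT %0:_(s32)
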
G_INTRINSIC_TRUNC
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer not larger in magnitude
than the operand.

G_INTRINSIC_ROUND
^^^^^^^^^^^^^^^^^

Returns the operand rounded to the nearest integer.

Vector Specific Operations
--------------------------

G_CONCAT_VECTORS
^^^^^^^^^^^^^^^^

Concatenate two vectors to form a longer vector.

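For example, concatenating two ``<2 x s32>`` vectors into a ``<4 x s32>``
vector:

.. code-block:: none

  %2:_(<4 x s32>) = G_CONCAT_VECTORS %0:_(<2 x s32>), %1:_(<2 x s32>)
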
G_BUILD_VECTOR, G_BUILD_VECTOR_TRUNC
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Create a vector from multiple scalar registers. No implicit
conversion is performed (i.e. the result element type must be the
same as that of all source operands).

The _TRUNC version truncates the larger operand types to fit the
destination vector element type.

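For example, building a ``<4 x s32>`` vector from four scalars:

.. code-block:: none

  %4:_(<4 x s32>) = G_BUILD_VECTOR %0:_(s32), %1:_(s32), %2:_(s32), %3:_(s32)
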
G_INSERT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^

Insert an element into a vector at the given index.

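For example, inserting a scalar into element 1 of a vector (the operands are
the vector, the element and the index) might look like:

.. code-block:: none

  %2:_(s32) = G_CONSTANT i32 1
  %3:_(<4 x s32>) = G_INSERT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s32), %2:_(s32)
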
G_EXTRACT_VECTOR_ELT
^^^^^^^^^^^^^^^^^^^^

Extract an element from a vector at the given index.

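For example, extracting element 1 of a vector (the operands are the vector and
the index) might look like:

.. code-block:: none

  %1:_(s64) = G_CONSTANT i64 1
  %2:_(s32) = G_EXTRACT_VECTOR_ELT %0:_(<4 x s32>), %1:_(s64)
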
G_SHUFFLE_VECTOR
^^^^^^^^^^^^^^^^

Concatenate two vectors and shuffle the elements according to the mask operand.
The mask operand should be an IR Constant which exactly matches the
corresponding mask for the IR shufflevector instruction.

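For example, interleaving the elements of two ``<2 x s32>`` vectors (the mask
is printed with the ``shufflemask(...)`` syntax in MIR) might look like:

.. code-block:: none

  %2:_(<4 x s32>) = G_SHUFFLE_VECTOR %0:_(<2 x s32>), %1:_(<2 x s32>), shufflemask(0, 2, 1, 3)
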
Vector Reduction Operations
---------------------------

These operations represent horizontal vector reduction, producing a scalar result.

G_VECREDUCE_SEQ_FADD, G_VECREDUCE_SEQ_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The SEQ variants perform reductions in sequential order. The first operand is
an initial scalar accumulator value, and the second operand is the vector to reduce.

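For example, a sequential floating-point add reduction might look like:

.. code-block:: none

  %2:_(s32) = G_VECREDUCE_SEQ_FADD %0:_(s32), %1:_(<4 x s32>)
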
G_VECREDUCE_FADD, G_VECREDUCE_FMUL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These reductions are relaxed variants which may reduce the elements in any order.

G_VECREDUCE_FMAX, G_VECREDUCE_FMIN
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

FMIN/FMAX nodes can have flags, for NaN/NoNaN variants.

Integer/bitwise reductions
^^^^^^^^^^^^^^^^^^^^^^^^^^

* G_VECREDUCE_ADD
* G_VECREDUCE_MUL
* G_VECREDUCE_AND
* G_VECREDUCE_OR
* G_VECREDUCE_XOR
* G_VECREDUCE_SMAX
* G_VECREDUCE_SMIN
* G_VECREDUCE_UMAX
* G_VECREDUCE_UMIN

Integer reductions may have a result type larger than the vector element type.
However, the reduction is performed using the vector element type and the value
in the top bits is unspecified.

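For example, an integer add reduction whose result is wider than the vector
element type (the top bits of ``%1`` are then unspecified) might look like:

.. code-block:: none

  %1:_(s32) = G_VECREDUCE_ADD %0:_(<4 x s8>)
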
Memory Operations
-----------------

G_LOAD, G_SEXTLOAD, G_ZEXTLOAD
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic load. Expects a MachineMemOperand in addition to explicit
operands. If the result size is larger than the memory size, the
high bits are undefined, sign-extended, or zero-extended respectively.

Only G_LOAD is valid if the result is a vector type. If the result is larger
than the memory size, the high elements are undefined (i.e. this is not a
per-element, vector anyextload).

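For example, a simple 32-bit load might look like this, with the
MachineMemOperand printed after the ``::``:

.. code-block:: none

  %1:_(s32) = G_LOAD %0:_(p0) :: (load (s32))
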
G_INDEXED_LOAD
^^^^^^^^^^^^^^

Generic indexed load. Combines a GEP with a load. $newaddr is set to $base + $offset.
If $am is 0 (post-indexed), then the value is loaded from $base; if $am is 1 (pre-indexed),
then the value is loaded from $newaddr.

G_INDEXED_SEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is sign-extending, as with G_SEXTLOAD.

G_INDEXED_ZEXTLOAD
^^^^^^^^^^^^^^^^^^

Same as G_INDEXED_LOAD except that the load performed is zero-extending, as with G_ZEXTLOAD.

G_STORE
^^^^^^^

Generic store. Expects a MachineMemOperand in addition to explicit
operands. If the stored value size is greater than the memory size,
the high bits are implicitly truncated. If this is a vector store, the
high elements are discarded (i.e. this does not function as a per-lane
vector, truncating store).

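For example, a simple 32-bit store might look like:

.. code-block:: none

  G_STORE %0:_(s32), %1:_(p0) :: (store (s32))
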
G_INDEXED_STORE
^^^^^^^^^^^^^^^

Combines a store with a GEP. See description of G_INDEXED_LOAD for indexing behaviour.

G_ATOMIC_CMPXCHG_WITH_SUCCESS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomic cmpxchg with internal success check. Expects a
MachineMemOperand in addition to explicit operands.

G_ATOMIC_CMPXCHG
^^^^^^^^^^^^^^^^

Generic atomic cmpxchg. Expects a MachineMemOperand in addition to explicit
operands.

G_ATOMICRMW_XCHG, G_ATOMICRMW_ADD, G_ATOMICRMW_SUB, G_ATOMICRMW_AND, G_ATOMICRMW_NAND, G_ATOMICRMW_OR, G_ATOMICRMW_XOR, G_ATOMICRMW_MAX, G_ATOMICRMW_MIN, G_ATOMICRMW_UMAX, G_ATOMICRMW_UMIN, G_ATOMICRMW_FADD, G_ATOMICRMW_FSUB
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generic atomicrmw. Expects a MachineMemOperand in addition to explicit
operands.

G_FENCE
^^^^^^^

.. caution::

  I couldn't find any documentation on this at the time of writing.

G_MEMCPY
^^^^^^^^

Generic memcpy. Expects two MachineMemOperands covering the store and load
respectively, in addition to explicit operands.

G_MEMCPY_INLINE
^^^^^^^^^^^^^^^

Generic inlined memcpy. Like G_MEMCPY, but it is guaranteed that this version
will not be lowered as a call to an external function. Currently the size
operand is required to evaluate as a constant (not an immediate), though that is
expected to change when llvm.memcpy.inline is taught to support dynamic sizes.

G_MEMMOVE
^^^^^^^^^

Generic memmove. Similar to G_MEMCPY, but the source and destination memory
ranges are allowed to overlap.

G_MEMSET
^^^^^^^^

Generic memset. Expects a MachineMemOperand in addition to explicit operands.

G_BZERO
^^^^^^^

Generic bzero. Expects a MachineMemOperand in addition to explicit operands.

Control Flow
------------

G_PHI
^^^^^

Implement the φ node in the SSA graph representing the function.

.. code-block:: none

  %1(s8) = G_PHI %7(s8), %bb.0, %3(s8), %bb.1

G_BR
^^^^

Unconditional branch.

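For example:

.. code-block:: none

  G_BR %bb.1
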
G_BRCOND
^^^^^^^^

Conditional branch.

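For example, branching to ``%bb.1`` when the condition is non-zero:

.. code-block:: none

  G_BRCOND %0:_(s1), %bb.1
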
G_BRINDIRECT
^^^^^^^^^^^^

Indirect branch.

G_BRJT
^^^^^^

Indirect branch to a jump table entry.

G_JUMP_TABLE
^^^^^^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

G_INTRINSIC, G_INTRINSIC_W_SIDE_EFFECTS
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Call an intrinsic.

The _W_SIDE_EFFECTS version is considered to have unknown side-effects and
as such cannot be reordered across other side-effecting instructions.

.. note::

  Unlike SelectionDAG, there is no _VOID variant. Both of these are permitted
  to have zero, one, or multiple results.

Variadic Arguments
------------------

G_VASTART
^^^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

G_VAARG
^^^^^^^

.. caution::

  I found no documentation for this instruction at the time of writing.

Other Operations
----------------

G_DYN_STACKALLOC
^^^^^^^^^^^^^^^^

Dynamically allocates memory on the stack with the specified size and
alignment, returning a pointer to the allocated space. An alignment value of
`0` or `1` means no specific alignment.

.. code-block:: none

  %8:_(p0) = G_DYN_STACKALLOC %7(s64), 32

Optimization Hints
------------------

These instructions do not correspond to any target instructions. They act as
hints for various combines.

G_ASSERT_SEXT, G_ASSERT_ZEXT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This signifies that the contents of a register were previously extended from a
smaller type.

The smaller type is denoted using an immediate operand. For scalars, this is the
width of the entire smaller type. For vectors, this is the width of the smaller
element type.

.. code-block:: none

  %x_was_zexted:_(s32) = G_ASSERT_ZEXT %x(s32), 16
  %y_was_zexted:_(<2 x s32>) = G_ASSERT_ZEXT %y(<2 x s32>), 16

  %z_was_sexted:_(s32) = G_ASSERT_SEXT %z(s32), 8

G_ASSERT_SEXT and G_ASSERT_ZEXT act like copies, albeit with some restrictions.

The source and destination registers must

- Be virtual
- Belong to the same register class
- Belong to the same register bank

It should always be safe to

- Look through the source register
- Replace the destination register with the source register
