* [absval](#absval)
* [argmax](#argmax)
* [batchnorm](#batchnorm)
* [bias](#bias)
* [binaryop](#binaryop)
* [bnll](#bnll)
* [cast](#cast)
* [clip](#clip)
* [concat](#concat)
* [convolution](#convolution)
* [convolutiondepthwise](#convolutiondepthwise)
* [crop](#crop)
* [dequantize](#dequantize)
* [lstm](#lstm)
* [pooling](#pooling)
* [sigmoid](#sigmoid)
* [softmax](#softmax)
* [tanh](#tanh)

# absval
```
y = abs(x)
```

* one_blob_only
* support_inplace

# argmax
```
y = argmax(x, out_max_val, topk)
```

* one_blob_only

|param id|name|type|default|
|--|--|--|--|
|0|out_max_val|int|0|
|1|topk|int|1|

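A minimal C++ sketch of what the two parameters control, assuming the layer selects the top-k entries of the flattened input; `out_max_val` decides whether the max values are reported alongside their indices (illustrative only, the exact ncnn output layout may differ):

```
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

// Illustrative top-k selection over a flattened input.
// "out_max_val" and "topk" mirror the layer parameters above.
static void argmax_topk(const std::vector<float>& x, int out_max_val, int topk)
{
    std::vector<int> idx(x.size());
    std::iota(idx.begin(), idx.end(), 0);

    // keep the indices of the topk largest values, in descending order
    std::partial_sort(idx.begin(), idx.begin() + topk, idx.end(),
                      [&](int a, int b) { return x[a] > x[b]; });

    for (int i = 0; i < topk; i++)
    {
        if (out_max_val)
            printf("index=%d value=%f\n", idx[i], x[idx[i]]);
        else
            printf("index=%d\n", idx[i]);
    }
}

int main()
{
    argmax_topk({0.1f, 2.5f, 0.7f, 1.9f}, /*out_max_val=*/1, /*topk=*/2);
    return 0;
}
```
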
# batchnorm
```
y = (x - mean) / sqrt(var + eps) * slope + bias
```

* one_blob_only
* support_inplace

|param id|name|type|default|
|--|--|--|--|
|0|channels|int|0|
|1|eps|float|0.f|

|weight|type|
|--|--|
|slope_data|float|
|mean_data|float|
|var_data|float|
|bias_data|float|

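A minimal sketch of how the formula above is applied per channel, folding mean, var, slope and bias into a single per-channel scale and shift (illustrative, not the ncnn implementation):

```
#include <cmath>
#include <vector>

// Apply y = (x - mean) / sqrt(var + eps) * slope + bias per channel.
// x is laid out as [channels][spatial] for illustration.
static void batchnorm(std::vector<float>& x, int channels, int spatial, float eps,
                      const std::vector<float>& slope_data,
                      const std::vector<float>& mean_data,
                      const std::vector<float>& var_data,
                      const std::vector<float>& bias_data)
{
    for (int q = 0; q < channels; q++)
    {
        // fold mean/var/slope/bias into one scale "a" and shift "b" per channel
        float a = slope_data[q] / std::sqrt(var_data[q] + eps);
        float b = bias_data[q] - mean_data[q] * a;

        float* ptr = x.data() + q * spatial;
        for (int i = 0; i < spatial; i++)
            ptr[i] = ptr[i] * a + b;
    }
}
```
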
# bias
```
y = x + bias
```

* one_blob_only
* support_inplace

|param id|name|type|default|
|--|--|--|--|
|0|bias_data_size|int|0|

|weight|type|
|--|--|
|bias_data|float|

# binaryop
This operation performs element-wise binary computation on two blobs A and B; their shapes are matched according to the [broadcasting rule](https://github.com/Tencent/ncnn/wiki/binaryop-broadcasting).
```
C = binaryop(A, B)
```
if with_scalar = 1:
- one_blob_only
- support_inplace

|param id|name|type|default|description|
|--|--|--|--|--|
|0|op_type|int|0|Operation type as follows|
|1|with_scalar|int|0|with_scalar=0: B is a blob, with_scalar=1: B is a scalar|
|2|b|float|0.f|When B is a scalar, B = b|

Operation type:
- 0 = ADD
- 1 = SUB
- 2 = MUL
- 3 = DIV
- 4 = MAX
- 5 = MIN
- 6 = POW
- 7 = RSUB
- 8 = RDIV

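A minimal sketch of the element-wise rule for the `with_scalar = 1` case, using the op_type ids listed above (illustrative only):

```
#include <algorithm>
#include <cmath>
#include <vector>

// Combine one element of A with the scalar b according to op_type.
static float binary_op(float a, float b, int op_type)
{
    switch (op_type)
    {
    case 0: return a + b;            // ADD
    case 1: return a - b;            // SUB
    case 2: return a * b;            // MUL
    case 3: return a / b;            // DIV
    case 4: return std::max(a, b);   // MAX
    case 5: return std::min(a, b);   // MIN
    case 6: return std::pow(a, b);   // POW
    case 7: return b - a;            // RSUB
    case 8: return b / a;            // RDIV
    default: return a;
    }
}

// with_scalar = 1: every element of A is combined with the same scalar b.
static void binaryop_scalar_inplace(std::vector<float>& A, float b, int op_type)
{
    for (float& v : A)
        v = binary_op(v, b, op_type);
}
```
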
# bnll
```
y = x + log(1 + e^(-x)), x > 0
y = log(1 + e^x),        x <= 0
```

* one_blob_only
* support_inplace

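Both branches evaluate the same function `log(1 + e^x)`; the split simply avoids overflow of `e^x` for large positive inputs. A minimal sketch:

```
#include <cmath>

// Numerically stable evaluation of y = log(1 + e^x).
static float bnll(float x)
{
    return x > 0 ? x + std::log(1.f + std::exp(-x))
                 : std::log(1.f + std::exp(x));
}
```
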
# cast
```
y = cast(x)
```

* one_blob_only
* support_packing

|param id|name|type|default|
|--|--|--|--|
|0|type_from|int|0|
|1|type_to|int|0|

Element type:

- 0 = auto
- 1 = float32
- 2 = float16
- 3 = int8
- 4 = bfloat16

# clip
```
y = clamp(x, min, max)
```

* one_blob_only
* support_inplace

|param id|name|type|default|
|--|--|--|--|
|0|min|float|-FLT_MAX|
|1|max|float|FLT_MAX|

# concat
```
y = concat(x0, x1, x2, ...) by axis
```

|param id|name|type|default|
|--|--|--|--|
|0|axis|int|0|

# convolution
```
x2 = pad(x, pads, pad_value)
x3 = conv(x2, weight, kernel, stride, dilation) + bias
y = activation(x3, act_type, act_params)
```

* one_blob_only

|param id|name|type|default|
|--|--|--|--|
|0|num_output|int|0|
|1|kernel_w|int|0|
|2|dilation_w|int|1|
|3|stride_w|int|1|
|4|pad_left|int|0|
|5|bias_term|int|0|
|6|weight_data_size|int|0|
|8|int8_scale_term|int|0|
|9|activation_type|int|0|
|10|activation_params|array|[ ]|
|11|kernel_h|int|kernel_w|
|12|dilation_h|int|dilation_w|
|13|stride_h|int|stride_w|
|14|pad_top|int|pad_left|
|15|pad_right|int|pad_left|
|16|pad_bottom|int|pad_top|
|18|pad_value|float|0.f|

|weight|type|
|--|--|
|weight_data|float/fp16/int8|
|bias_data|float|

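The output spatial size follows the usual dilated-convolution relation; a minimal sketch, assuming explicit padding given by the pad_* parameters (the same relation applies to the height):

```
// Output spatial size for explicit padding.
// kernel_extent_w is the dilated kernel footprint.
static int conv_output_size(int w, int kernel_w, int dilation_w, int stride_w,
                            int pad_left, int pad_right)
{
    int kernel_extent_w = dilation_w * (kernel_w - 1) + 1;
    return (w + pad_left + pad_right - kernel_extent_w) / stride_w + 1;
}
```

weight_data is then expected to hold num_output x input_channels x kernel_h x kernel_w values, which is what weight_data_size records.
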
# convolutiondepthwise
```
x2 = pad(x, pads, pad_value)
x3 = conv(x2, weight, kernel, stride, dilation, group) + bias
y = activation(x3, act_type, act_params)
```

* one_blob_only

|param id|name|type|default|
|--|--|--|--|
|0|num_output|int|0|
|1|kernel_w|int|0|
|2|dilation_w|int|1|
|3|stride_w|int|1|
|4|pad_left|int|0|
|5|bias_term|int|0|
|6|weight_data_size|int|0|
|7|group|int|1|
|8|int8_scale_term|int|0|
|9|activation_type|int|0|
|10|activation_params|array|[ ]|
|11|kernel_h|int|kernel_w|
|12|dilation_h|int|dilation_w|
|13|stride_h|int|stride_w|
|14|pad_top|int|pad_left|
|15|pad_right|int|pad_left|
|16|pad_bottom|int|pad_top|
|18|pad_value|float|0.f|

|weight|type|
|--|--|
|weight_data|float/fp16/int8|
|bias_data|float|

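Grouped convolution splits the input and output channels into `group` independent groups; depthwise convolution is the special case where group equals both the number of input channels and num_output. A small sketch of the channel partitioning (hypothetical helper, not part of ncnn):

```
#include <cstdio>

// Show which input channels feed which output channels for a given group count.
// Depthwise convolution is the case channels == num_output == group.
static void print_group_layout(int channels, int num_output, int group)
{
    int channels_per_group = channels / group;
    int outputs_per_group = num_output / group;

    for (int g = 0; g < group; g++)
    {
        printf("group %d: input channels [%d, %d) -> output channels [%d, %d)\n",
               g,
               g * channels_per_group, (g + 1) * channels_per_group,
               g * outputs_per_group, (g + 1) * outputs_per_group);
    }
}

int main()
{
    // depthwise example: each of the 4 input channels gets its own filter
    print_group_layout(4, 4, 4);
    return 0;
}
```
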
# crop
```
y = crop(x)
```

* one_blob_only

|param id|name|type|default|
|--|--|--|--|
|0|woffset|int|0|
|1|hoffset|int|0|
|2|coffset|int|0|
|3|outw|int|0|
|4|outh|int|0|
|5|outc|int|0|
|6|woffset2|int|0|
|7|hoffset2|int|0|
|8|coffset2|int|0|
|9|starts|array|[ ]|
|10|ends|array|[ ]|
|11|axes|array|[ ]|

# dequantize
```
y = x * scale + bias
```

* one_blob_only
* support_inplace

|param id|name|type|default|
|--|--|--|--|
|0|scale|float|1.f|
|1|bias_term|int|0|
|2|bias_data_size|int|0|

# lstm
Apply a single-layer LSTM to a feature sequence of `T` timesteps. The input blob shape is `[w=input_size, h=T]` and the output blob shape is `[w=num_output, h=T]`.

* one_blob_only

|param id|name|type|default|description|
|--|--|--|--|--|
|0|num_output|int|0|hidden size of output|
|1|weight_data_size|int|0|total size of IFOG weight matrix|
|2|direction|int|0|0=forward, 1=reverse, 2=bidirectional|

|weight|type|shape|description|
|--|--|--|--|
|weight_xc_data|float|`[w=input_size, h=num_output * 4, c=num_directions]`||
|bias_c_data|float|`[w=num_output, h=4, c=num_directions]`||
|weight_hc_data|float|`[w=num_output, h=num_output * 4, c=num_directions]`||

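A sketch of one forward timestep using the standard LSTM cell equations, assuming the I, F, O, G gate order suggested by the `num_output * 4` rows of the weight blobs above (illustrative, not the ncnn implementation):

```
#include <cmath>
#include <vector>

// One LSTM timestep. Gate rows are assumed to be stored in I, F, O, G order
// within the row-major [num_output * 4 x input_size] weight_xc and
// [num_output * 4 x num_output] weight_hc matrices.
static void lstm_step(const std::vector<float>& x,            // input_size
                      std::vector<float>& h,                  // num_output, updated
                      std::vector<float>& c,                  // num_output, updated
                      const std::vector<float>& weight_xc,
                      const std::vector<float>& weight_hc,
                      const std::vector<float>& bias_c,       // num_output * 4
                      int input_size, int num_output)
{
    auto sigmoid = [](float v) { return 1.f / (1.f + std::exp(-v)); };

    std::vector<float> h_new(num_output), c_new(num_output);

    for (int q = 0; q < num_output; q++)
    {
        float gates[4]; // I, F, O, G pre-activations
        for (int g = 0; g < 4; g++)
        {
            int row = g * num_output + q;
            float sum = bias_c[row];
            for (int i = 0; i < input_size; i++)
                sum += weight_xc[row * input_size + i] * x[i];
            for (int i = 0; i < num_output; i++)
                sum += weight_hc[row * num_output + i] * h[i];
            gates[g] = sum;
        }

        float I = sigmoid(gates[0]);
        float F = sigmoid(gates[1]);
        float O = sigmoid(gates[2]);
        float G = std::tanh(gates[3]);

        c_new[q] = F * c[q] + I * G;            // new cell state
        h_new[q] = O * std::tanh(c_new[q]);     // new hidden state / output
    }

    h = h_new;
    c = c_new;
}
```
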
# pooling
```
x2 = pad(x, pads)
x3 = pooling(x2, kernel, stride)
```

|param id|name|type|default|description|
|--|--|--|--|--|
|0|pooling_type|int|0|0: max, 1: avg|
|1|kernel_w|int|0||
|2|stride_w|int|1||
|3|pad_left|int|0||
|4|global_pooling|int|0||
|5|pad_mode|int|0|0: full padding <br/> 1: valid padding <br/> 2: tensorflow padding=SAME or onnx padding=SAME_UPPER <br/> 3: onnx padding=SAME_LOWER|
|11|kernel_h|int|kernel_w||
|12|stride_h|int|stride_w||
|13|pad_top|int|pad_left||
|14|pad_right|int|pad_left||
|15|pad_bottom|int|pad_top||

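A sketch of the pooled output width for the two explicit-padding modes, assuming full padding rounds up (Caffe style) while valid padding rounds down; the SAME-style modes derive the padding from the target output size instead (illustrative only):

```
// Pooled output width (the same relation applies to the height).
static int pooling_output_size(int w, int kernel_w, int stride_w,
                               int pad_left, int pad_right, int pad_mode)
{
    int padded = w + pad_left + pad_right;
    if (pad_mode == 0)
        return (padded - kernel_w + stride_w - 1) / stride_w + 1; // full: ceil
    return (padded - kernel_w) / stride_w + 1;                    // valid: floor
}
```
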
# sigmoid
```
y = 1 / (1 + exp(-x))
```

* one_blob_only
* support_inplace

# softmax
```
softmax(x, axis)
```

* one_blob_only
* support_inplace

|param id|name|type|default|description|
|--|--|--|--|--|
|0|axis|int|0||
|1|fixbug0|int|0|hack for bug fix, should be 1|

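A minimal sketch of the computation over one slice taken along `axis`, with the usual max subtraction for numerical stability (illustrative only):

```
#include <algorithm>
#include <cmath>
#include <vector>

// Softmax over one slice taken along the chosen axis; the maximum is
// subtracted first so exp() cannot overflow.
static void softmax_inplace(std::vector<float>& x)
{
    float max_val = *std::max_element(x.begin(), x.end());

    float sum = 0.f;
    for (float& v : x)
    {
        v = std::exp(v - max_val);
        sum += v;
    }
    for (float& v : x)
        v /= sum;
}
```
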
# tanh
```
y = tanh(x)
```

* one_blob_only
* support_inplace