# Applications

Keras Applications are deep learning models that are made available alongside pre-trained weights.
These models can be used for prediction, feature extraction, and fine-tuning.

Weights are downloaded automatically when instantiating a model. They are stored at `~/.keras/models/`.

## Available models

### Models for image classification with weights trained on ImageNet:

- [Xception](#xception)
- [VGG16](#vgg16)
- [VGG19](#vgg19)
- [ResNet, ResNetV2](#resnet)
- [InceptionV3](#inceptionv3)
- [InceptionResNetV2](#inceptionresnetv2)
- [MobileNet](#mobilenet)
- [MobileNetV2](#mobilenetv2)
- [DenseNet](#densenet)
- [NASNet](#nasnet)

All of these architectures are compatible with all the backends (TensorFlow, Theano, and CNTK), and upon instantiation the models will be built according to the image data format set in your Keras configuration file at `~/.keras/keras.json`. For instance, if you have set `image_data_format=channels_last`, then any model loaded from this repository will get built according to the TensorFlow data format convention, "Height-Width-Depth". You can check the active setting at runtime, as shown in the snippet after the notes below.

Note that:
- For `Keras < 2.2.0`, the Xception model is only available for TensorFlow, due to its reliance on `SeparableConvolution` layers.
- For `Keras < 2.1.5`, the MobileNet model is only available for TensorFlow, due to its reliance on `DepthwiseConvolution` layers.
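A quick way to confirm which data format your models will be built with is to query the backend directly; a minimal sketch:

```python
from keras import backend as K

# Returns either 'channels_last' (Height-Width-Depth, the TensorFlow
# convention) or 'channels_first' (Depth-Height-Width), depending on
# the `image_data_format` entry in ~/.keras/keras.json.
print(K.image_data_format())
```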

-----

## Usage examples for image classification models

### Classify ImageNet classes with ResNet50

```python
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
# Predicted: [(u'n02504013', u'Indian_elephant', 0.82658225), (u'n01871265', u'tusker', 0.1122357), (u'n02504458', u'African_elephant', 0.061040461)]
```
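Since `decode_predictions` returns one list of tuples per sample, the same call handles a whole batch at once. A minimal sketch, where `'dog.jpg'` is a hypothetical second local image file:

```python
from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
from keras.preprocessing import image
import numpy as np

model = ResNet50(weights='imagenet')

# 'elephant.jpg' and 'dog.jpg' are assumed to be local image files.
paths = ['elephant.jpg', 'dog.jpg']
batch = np.stack([image.img_to_array(image.load_img(p, target_size=(224, 224)))
                  for p in paths])
batch = preprocess_input(batch)

preds = model.predict(batch)  # shape (2, 1000)
for path, decoded in zip(paths, decode_predictions(preds, top=3)):
    print(path, decoded)
```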

### Extract features with VGG16

```python
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np

model = VGG16(weights='imagenet', include_top=False)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

features = model.predict(x)
```
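With `include_top=False` and no `pooling` argument, the extracted features are the 4D output of the last convolutional block. Continuing the example above (and assuming the `'channels_last'` data format):

```python
# VGG16's last convolutional block has 512 channels, and a 224x224 input
# is downsampled to a 7x7 spatial grid.
print(features.shape)  # expected: (1, 7, 7, 512)

# Passing pooling='avg' instead collapses the spatial dimensions, so the
# model outputs a 2D feature vector directly.
model_avg = VGG16(weights='imagenet', include_top=False, pooling='avg')
print(model_avg.predict(x).shape)  # expected: (1, 512)
```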

### Extract features from an arbitrary intermediate layer with VGG19

```python
from keras.applications.vgg19 import VGG19
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input
from keras.models import Model
import numpy as np

base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)

img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

block4_pool_features = model.predict(x)
```
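To find valid names to pass to `get_layer`, you can list the layers of the base model; a minimal sketch continuing the example above:

```python
# Print every layer name in the loaded VGG19 model; any of these can be
# used as the tap point for an intermediate-feature model.
for layer in base_model.layers:
    print(layer.name)  # e.g. block1_conv1, ..., block4_pool, ..., block5_pool
```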

### Fine-tune InceptionV3 on a new set of classes

```python
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras import backend as K

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)

# add a global spatial average pooling layer
x = base_model.output
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# and a logistic layer -- let's say we have 200 classes
predictions = Dense(200, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

# train the model on the new data for a few epochs
model.fit_generator(...)

# at this point, the top layers are well trained and we can start fine-tuning
# convolutional layers from Inception V3. We will freeze the bottom N layers
# and train the remaining top layers.

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# we need to recompile the model for these modifications to take effect;
# we use SGD with a low learning rate
from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9), loss='categorical_crossentropy')

# we train our model again (this time fine-tuning the top 2 inception blocks
# alongside the top Dense layers)
model.fit_generator(...)
```
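The `model.fit_generator(...)` calls above are intentionally left elided. As one possible way to supply the data, here is a minimal, hypothetical sketch continuing the example, using `ImageDataGenerator.flow_from_directory`; the `data/train` directory layout and the step/epoch counts are assumptions:

```python
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.inception_v3 import preprocess_input

# 'data/train' is a hypothetical directory containing one sub-folder per
# class; flow_from_directory infers the labels from that layout.
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(299, 299),    # InceptionV3's default input size
    batch_size=32,
    class_mode='categorical')  # matches the categorical_crossentropy loss

model.fit_generator(train_generator, steps_per_epoch=100, epochs=3)
```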


### Build InceptionV3 over a custom input tensor

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input

# this could also be the output of a different Keras model or layer
input_tensor = Input(shape=(224, 224, 3))  # this assumes K.image_data_format() == 'channels_last'

model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=True)
```
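If your configuration is set to `'channels_first'` instead, the input tensor's shape changes accordingly; a minimal sketch that picks the right shape at runtime:

```python
from keras import backend as K
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input

# Choose the input shape that matches the configured data format.
if K.image_data_format() == 'channels_first':
    input_tensor = Input(shape=(3, 224, 224))
else:
    input_tensor = Input(shape=(224, 224, 3))

model = InceptionV3(input_tensor=input_tensor, weights='imagenet', include_top=True)
```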

-----

# Documentation for individual models

| Model | Size | Top-1 Accuracy | Top-5 Accuracy | Parameters | Depth |
| ----- | ----: | --------------: | --------------: | ----------: | -----: |
| [Xception](#xception) | 88 MB | 0.790 | 0.945 | 22,910,480 | 126 |
| [VGG16](#vgg16) | 528 MB | 0.713 | 0.901 | 138,357,544 | 23 |
| [VGG19](#vgg19) | 549 MB | 0.713 | 0.900 | 143,667,240 | 26 |
| [ResNet50](#resnet) | 98 MB | 0.749 | 0.921 | 25,636,712 | - |
| [ResNet101](#resnet) | 171 MB | 0.764 | 0.928 | 44,707,176 | - |
| [ResNet152](#resnet) | 232 MB | 0.766 | 0.931 | 60,419,944 | - |
| [ResNet50V2](#resnet) | 98 MB | 0.760 | 0.930 | 25,613,800 | - |
| [ResNet101V2](#resnet) | 171 MB | 0.772 | 0.938 | 44,675,560 | - |
| [ResNet152V2](#resnet) | 232 MB | 0.780 | 0.942 | 60,380,648 | - |
| [InceptionV3](#inceptionv3) | 92 MB | 0.779 | 0.937 | 23,851,784 | 159 |
| [InceptionResNetV2](#inceptionresnetv2) | 215 MB | 0.803 | 0.953 | 55,873,736 | 572 |
| [MobileNet](#mobilenet) | 16 MB | 0.704 | 0.895 | 4,253,864 | 88 |
| [MobileNetV2](#mobilenetv2) | 14 MB | 0.713 | 0.901 | 3,538,984 | 88 |
| [DenseNet121](#densenet) | 33 MB | 0.750 | 0.923 | 8,062,504 | 121 |
| [DenseNet169](#densenet) | 57 MB | 0.762 | 0.932 | 14,307,880 | 169 |
| [DenseNet201](#densenet) | 80 MB | 0.773 | 0.936 | 20,242,984 | 201 |
| [NASNetMobile](#nasnet) | 23 MB | 0.744 | 0.919 | 5,326,716 | - |
| [NASNetLarge](#nasnet) | 343 MB | 0.825 | 0.960 | 88,949,818 | - |

The top-1 and top-5 accuracy refer to the model's performance on the ImageNet validation dataset.

Depth refers to the topological depth of the network. This includes activation layers, batch normalization layers, etc.
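The parameter counts in the table can be reproduced from an instantiated model; a minimal sketch (with `weights=None`, no download is needed):

```python
from keras.applications.vgg16 import VGG16

# Build the architecture with random weights just to inspect its size;
# count_params() should match the table above (138,357,544 for VGG16).
model = VGG16(weights=None)
print(model.count_params())
```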

-----


## Xception


```python
keras.applications.xception.Xception(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

Xception V1 model, with weights pre-trained on ImageNet.

On ImageNet, this model gets to a top-1 validation accuracy of 0.790
and a top-5 validation accuracy of 0.945.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 299x299.

### Arguments

- include_top: whether to include the fully-connected layer at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(299, 299, 3)`).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 71.
    E.g. `(150, 150, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.
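As a usage sketch for the `pooling` argument, the following builds Xception as a 2D feature extractor; the `(150, 150, 3)` shape is just the example value from the arguments above:

```python
from keras.applications.xception import Xception

# Headless Xception with global average pooling: the output is a 2D
# tensor of shape (batch_size, 2048) instead of a 4D feature map.
model = Xception(weights='imagenet', include_top=False,
                 pooling='avg', input_shape=(150, 150, 3))
print(model.output_shape)  # expected: (None, 2048)
```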

### References

- [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357)

### License

These weights are trained by ourselves and are released under the MIT license.


-----


## VGG16

```python
keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

VGG16 model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 224x224.

### Arguments

- include_top: whether to include the 3 fully-connected layers at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.

### References

- [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556): please cite this paper if you use the VGG models in your work.

### License

These weights are ported from the ones [released by VGG at Oxford](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) under the [Creative Commons Attribution License](https://creativecommons.org/licenses/by/4.0/).

-----

## VGG19


```python
keras.applications.vgg19.VGG19(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```


VGG19 model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 224x224.

### Arguments

- include_top: whether to include the 3 fully-connected layers at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.


### References

- [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)

### License

These weights are ported from the ones [released by VGG at Oxford](http://www.robots.ox.ac.uk/~vgg/research/very_deep/) under the [Creative Commons Attribution License](https://creativecommons.org/licenses/by/4.0/).

-----

## ResNet


```python
keras.applications.resnet.ResNet50(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet.ResNet101(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet.ResNet152(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet_v2.ResNet50V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet_v2.ResNet101V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.resnet_v2.ResNet152V2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```


ResNet and ResNetV2 models, with weights pre-trained on ImageNet.

These models can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for these models is 224x224.


### Arguments

- include_top: whether to include the fully-connected layer at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.
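A short usage sketch for the V2 variants; note the `resnet_v2` module path, and that (assuming the layout of the other applications modules) `preprocess_input` should be imported from the matching module, since preprocessing differs between model families:

```python
from keras.applications.resnet_v2 import ResNet50V2, preprocess_input
from keras.preprocessing import image
import numpy as np

# Headless ResNet50V2 as a 2048-dimensional feature extractor.
model = ResNet50V2(weights='imagenet', include_top=False, pooling='avg')

img = image.load_img('elephant.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
features = model.predict(x)  # expected shape: (1, 2048)
```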

### References

- `ResNet`: [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
- `ResNetV2`: [Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)

### License

These weights are ported from the following:

- `ResNet`: [The original repository of Kaiming He](https://github.com/KaimingHe/deep-residual-networks) under the [MIT license](https://github.com/KaimingHe/deep-residual-networks/blob/master/LICENSE).
- `ResNetV2`: [Facebook](https://github.com/facebook/fb.resnet.torch) under the [BSD license](https://github.com/facebook/fb.resnet.torch/blob/master/LICENSE).

-----

## InceptionV3


```python
keras.applications.inception_v3.InceptionV3(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

Inception V3 model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 299x299.


### Arguments

- include_top: whether to include the fully-connected layer at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(299, 299, 3)` (with `'channels_last'` data format)
    or `(3, 299, 299)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 75.
    E.g. `(150, 150, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.

### References

- [Rethinking the Inception Architecture for Computer Vision](http://arxiv.org/abs/1512.00567)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).

-----

## InceptionResNetV2


```python
keras.applications.inception_resnet_v2.InceptionResNetV2(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

Inception-ResNet V2 model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 299x299.


### Arguments

- include_top: whether to include the fully-connected layer at the top of the network.
- weights: one of `None` (random initialization) or `'imagenet'` (pre-training on ImageNet).
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`) to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(299, 299, 3)` (with `'channels_last'` data format)
    or `(3, 299, 299)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 75.
    E.g. `(150, 150, 3)` would be one valid value.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.

### References

- [Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning](https://arxiv.org/abs/1602.07261)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).

-----

## MobileNet


```python
keras.applications.mobilenet.MobileNet(input_shape=None, alpha=1.0, depth_multiplier=1, dropout=1e-3, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
```

MobileNet model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 224x224.

### Arguments

- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- alpha: controls the width of the network.
    - If `alpha` < 1.0, proportionally decreases the number
        of filters in each layer.
    - If `alpha` > 1.0, proportionally increases the number
        of filters in each layer.
    - If `alpha` = 1, the default number of filters from the paper
        is used at each layer.
- depth_multiplier: depth multiplier for depthwise convolution
    (also called the resolution multiplier).
- dropout: dropout rate.
- include_top: whether to include the fully-connected
    layer at the top of the network.
- weights: `None` (random initialization) or
    `'imagenet'` (ImageNet weights).
- input_tensor: optional Keras tensor (i.e. output of
    `layers.Input()`)
    to use as image input for the model.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a
        2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.
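As a usage sketch for the `alpha` width multiplier; note that pre-trained ImageNet weights are published only for a limited set of alpha values (to our understanding, 0.25, 0.50, 0.75, and 1.0), so treat that grid as an assumption:

```python
from keras.applications.mobilenet import MobileNet

# A thinner MobileNet: alpha=0.5 halves the number of filters per layer,
# shrinking the parameter count well below the full model's ~4.2M.
model = MobileNet(alpha=0.5, weights='imagenet')
print(model.count_params())
```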

### References

- [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/pdf/1704.04861.pdf)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).

-----

## DenseNet


```python
keras.applications.densenet.DenseNet121(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.densenet.DenseNet169(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
keras.applications.densenet.DenseNet201(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000)
```

DenseNet models, with weights pre-trained on ImageNet.

These models can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for these models is 224x224.

### Arguments

- include_top: whether to include the fully-connected
    layer at the top of the network.
- weights: one of `None` (random initialization),
    `'imagenet'` (pre-training on ImageNet),
    or the path to the weights file to be loaded.
- input_tensor: optional Keras tensor (i.e. output of `layers.Input()`)
    to use as image input for the model.
- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- pooling: optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a 2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.

### References

- [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993) (CVPR 2017 Best Paper Award)

### License

These weights are released under [the BSD 3-clause License](https://github.com/liuzhuang13/DenseNet/blob/master/LICENSE).

-----

## NASNet


```python
keras.applications.nasnet.NASNetLarge(input_shape=None, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
keras.applications.nasnet.NASNetMobile(input_shape=None, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
```

Neural Architecture Search Network (NASNet) models, with weights pre-trained on ImageNet.

The default input size for the NASNetLarge model is 331x331 and for the
NASNetMobile model is 224x224.

### Arguments

- input_shape: optional shape tuple, only to be specified
    if `include_top` is `False` (otherwise the input shape
    has to be `(224, 224, 3)` (with `'channels_last'` data format)
    or `(3, 224, 224)` (with `'channels_first'` data format)
    for NASNetMobile, or `(331, 331, 3)` (with `'channels_last'`
    data format) or `(3, 331, 331)` (with `'channels_first'`
    data format) for NASNetLarge).
    It should have exactly 3 input channels,
    and width and height should be no smaller than 32.
    E.g. `(200, 200, 3)` would be one valid value.
- include_top: whether to include the fully-connected
    layer at the top of the network.
- weights: `None` (random initialization) or
    `'imagenet'` (ImageNet weights).
- input_tensor: optional Keras tensor (i.e. output of
    `layers.Input()`)
    to use as image input for the model.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a
        2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.
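A short sketch instantiating both variants at their default input sizes (assuming the `'channels_last'` data format):

```python
from keras.applications.nasnet import NASNetLarge, NASNetMobile

# Each variant has its own default input resolution, as noted above.
mobile = NASNetMobile(weights='imagenet')  # expects 224x224 inputs
large = NASNetLarge(weights='imagenet')    # expects 331x331 inputs
print(mobile.input_shape)  # expected: (None, 224, 224, 3)
print(large.input_shape)   # expected: (None, 331, 331, 3)
```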

### References

- [Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/abs/1707.07012)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).

-----

## MobileNetV2


```python
keras.applications.mobilenet_v2.MobileNetV2(input_shape=None, alpha=1.0, include_top=True, weights='imagenet', input_tensor=None, pooling=None, classes=1000)
```

MobileNetV2 model, with weights pre-trained on ImageNet.

This model can be built with either the `'channels_first'` data format (channels, height, width) or the `'channels_last'` data format (height, width, channels).

The default input size for this model is 224x224.

### Arguments

- input_shape: optional shape tuple, to be specified if you would
    like to use a model with an input image resolution that is not
    `(224, 224, 3)`.
    It should have exactly 3 input channels.
    You can also omit this option if you would like
    to infer `input_shape` from an `input_tensor`.
    If you choose to include both `input_tensor` and `input_shape`,
    then `input_shape` will be used if they match; if the shapes
    do not match, an error will be raised.
    E.g. `(160, 160, 3)` would be one valid value.
- alpha: controls the width of the network. This is known as the
    width multiplier in the MobileNetV2 paper.
    - If `alpha` < 1.0, proportionally decreases the number
        of filters in each layer.
    - If `alpha` > 1.0, proportionally increases the number
        of filters in each layer.
    - If `alpha` = 1, the default number of filters from the paper
        is used at each layer.
- include_top: whether to include the fully-connected
    layer at the top of the network.
- weights: one of `None` (random initialization),
    `'imagenet'` (pre-training on ImageNet),
    or the path to the weights file to be loaded.
- input_tensor: optional Keras tensor (i.e. output of
    `layers.Input()`)
    to use as image input for the model.
- pooling: Optional pooling mode for feature extraction
    when `include_top` is `False`.
    - `None` means that the output of the model will be
        the 4D tensor output of the
        last convolutional block.
    - `'avg'` means that global average pooling
        will be applied to the output of the
        last convolutional block, and thus
        the output of the model will be a
        2D tensor.
    - `'max'` means that global max pooling will
        be applied.
- classes: optional number of classes to classify images
    into, only to be specified if `include_top` is `True`, and
    if no `weights` argument is specified.

### Returns

A Keras `Model` instance.
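Pre-trained ImageNet weights for MobileNetV2 were released only for a grid of input resolutions and `alpha` values; the combination below is a sketch that should map to released weights, but treat the exact grid as an assumption:

```python
from keras.applications.mobilenet_v2 import MobileNetV2

# A reduced-resolution, reduced-width MobileNetV2; (160, 160, 3) and
# alpha=0.75 are assumed to be among the published ImageNet configurations.
model = MobileNetV2(input_shape=(160, 160, 3), alpha=0.75, weights='imagenet')
print(model.count_params())
```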

### Raises

ValueError: in case of an invalid argument for `weights`,
    or an invalid input shape or `alpha` value
    when `weights='imagenet'`.

### References

- [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)

### License

These weights are released under [the Apache License](https://github.com/tensorflow/models/blob/master/LICENSE).