---
layout: page_api
title: NDArray
permalink: /api/scala/docs/tutorials/ndarray
is_tutorial: true
tag: scala
---
<!--- Licensed to the Apache Software Foundation (ASF) under one -->
<!--- or more contributor license agreements.  See the NOTICE file -->
<!--- distributed with this work for additional information -->
<!--- regarding copyright ownership.  The ASF licenses this file -->
<!--- to you under the Apache License, Version 2.0 (the -->
<!--- "License"); you may not use this file except in compliance -->
<!--- with the License.  You may obtain a copy of the License at -->

<!---   http://www.apache.org/licenses/LICENSE-2.0 -->

<!--- Unless required by applicable law or agreed to in writing, -->
<!--- software distributed under the License is distributed on an -->
<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
<!--- KIND, either express or implied.  See the License for the -->
<!--- specific language governing permissions and limitations -->
<!--- under the License. -->

# NDArray API

The `NDArray` API (`org.apache.mxnet.NDArray`) contains tensor operations similar to those of `numpy.ndarray`. The syntax is also similar, except for some additional calls for dealing with I/O and multiple devices.

Topics:

* [Create NDArray](#create-ndarray)
* [NDArray Operations](#ndarray-operations)
* [NDArray API Reference]({{'/api/scala/docs/api/#org.apache.mxnet.NDArray'|relative_url}})

## Create NDArray

Create an `NDArray` as follows:

```scala
import org.apache.mxnet._
// all-zero array of dimension 100x50
val a = NDArray.zeros(100, 50)
// all-one array of dimension 256x32x128x1
val b = NDArray.ones(256, 32, 128, 1)
// initialize an array with given contents; the Shape parameter sets its dimensions
val c = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(2, 3))
```

This is similar to the way you use `numpy`.
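
You can read an array's contents back into a plain Scala array with `toArray` to verify what was created; the values come back flattened in row-major order. A minimal sketch (`NDArray.full`, which fills an array with a constant, is assumed to be available in your MXNet version):

```scala
import org.apache.mxnet._
val c = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(2, 3))
c.shape
// org.apache.mxnet.Shape = (2,3)
c.toArray
// Array[Float] = Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0)

// fill a 2x3 array with a constant value (NDArray.full assumed available)
val d = NDArray.full(Shape(2, 3), 7f)
d.toArray
// Array[Float] = Array(7.0, 7.0, 7.0, 7.0, 7.0, 7.0)
```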

## NDArray Operations

We provide some basic NDArray operations, such as arithmetic and slice operations.

### Arithmetic Operations

```scala
import org.apache.mxnet._
val a = NDArray.zeros(100, 50)
a.shape
// org.apache.mxnet.Shape = (100,50)
val b = NDArray.ones(100, 50)
// c and d will be calculated in parallel here!
val c = a + b
val d = a - b
// in-place operation: b's contents will be modified, but c and d won't be affected
b += d
```
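
MXNet queues operations asynchronously, which is why `c` and `d` above can be computed in parallel; reading a result back to the JVM, for example with `toArray`, blocks until the computation producing it has finished. A short check of the values in the same style:

```scala
import org.apache.mxnet._
val a = NDArray.zeros(100, 50)
val b = NDArray.ones(100, 50)
val c = a + b
val d = a - b
// toArray copies the result to the JVM and implicitly waits for it
c.toArray.take(3)
// Array[Float] = Array(1.0, 1.0, 1.0)
d.toArray.take(3)
// Array[Float] = Array(-1.0, -1.0, -1.0)
```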

### Multiplication/Division Operations

```scala
import org.apache.mxnet._
// Multiplication
val ndones = NDArray.ones(2, 1)
val ndtwos = ndones * 2
ndtwos.toArray
// Array[Float] = Array(2.0, 2.0)
(ndones * ndones).toArray
// Array[Float] = Array(1.0, 1.0)
(ndtwos * ndtwos).toArray
// Array[Float] = Array(4.0, 4.0)
ndtwos *= ndtwos // in-place
ndtwos.toArray
// Array[Float] = Array(4.0, 4.0)

// Division
val ndones = NDArray.ones(2, 1)
val ndzeros = ndones - 1f
val ndhalves = ndones / 2
ndhalves.toArray
// Array[Float] = Array(0.5, 0.5)
(ndhalves / ndhalves).toArray
// Array[Float] = Array(1.0, 1.0)
(ndones / ndones).toArray
// Array[Float] = Array(1.0, 1.0)
(ndzeros / ndones).toArray
// Array[Float] = Array(0.0, 0.0)
ndhalves /= ndhalves
ndhalves.toArray
// Array[Float] = Array(1.0, 1.0)
```
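
Note that `*` denotes element-wise multiplication, even for two-dimensional arrays; it never computes a matrix product. For matrix multiplication, use `NDArray.dot`, described below.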

### Slice Operations

```scala
import org.apache.mxnet._
val a = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2))
// slice(i) takes the i-th slice along the first axis, keeping that axis
val a1 = a.slice(1)
assert(a1.shape == Shape(1, 2))
assert(a1.toArray sameElements Array(3f, 4f))

// slice(start, stop) takes the half-open range [start, stop)
val a2 = a.slice(1, 3)
assert(a2.shape == Shape(2, 2))
assert(a2.toArray sameElements Array(3f, 4f, 5f, 6f))
```
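
`slice` operates along the first axis and keeps it, so a single slice of a `(3,2)` array still has shape `(1,2)`. A sketch of dropping that axis instead, assuming the `at` indexing method is available in your MXNet version:

```scala
import org.apache.mxnet._
val a = NDArray.array(Array(1f, 2f, 3f, 4f, 5f, 6f), shape = Shape(3, 2))
// `at` (assumed available) indexes the first axis and removes it
val row = a.at(1)
row.shape
// org.apache.mxnet.Shape = (2)
row.toArray
// Array[Float] = Array(3.0, 4.0)
```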

### Dot Product

```scala
import org.apache.mxnet._
val arr1 = NDArray.array(Array(1f, 2f), shape = Shape(1, 2))
val arr2 = NDArray.array(Array(3f, 4f), shape = Shape(2, 1))
val res = NDArray.dot(arr1, arr2)
res.shape
// org.apache.mxnet.Shape = (1,1)
res.toArray
// Array[Float] = Array(11.0)
```
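
`dot` follows standard matrix-multiplication rules, so the inner dimensions must match: a `(1,2)` array times a `(2,1)` array yields `(1,1)`. If both vectors share the same layout, you can transpose one of them first; a sketch, assuming the 2-D transpose method `T` is available:

```scala
import org.apache.mxnet._
val u = NDArray.array(Array(1f, 2f), shape = Shape(1, 2))
val v = NDArray.array(Array(3f, 4f), shape = Shape(1, 2))
// transpose v to shape (2,1) so the inner dimensions line up
val inner = NDArray.dot(u, v.T)
inner.toArray
// Array[Float] = Array(11.0)
```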

### Save and Load NDArray

You can use MXNet functions to save and load a list or dictionary of NDArrays to and from file systems, as follows:

```scala
import org.apache.mxnet._
val a = NDArray.zeros(100, 200)
val b = NDArray.zeros(100, 200)
// save a list of NDArrays
NDArray.save("/path/to/array/file", Array(a, b))
// save a dictionary of NDArrays to AWS S3
NDArray.save("s3://path/to/s3/array", Map("A" -> a, "B" -> b))
// save a list of NDArrays to HDFS
NDArray.save("hdfs://path/to/hdfs/array", Array(a, b))
val from_file = NDArray.load("/path/to/array/file")
val from_s3 = NDArray.load("s3://path/to/s3/array")
val from_hdfs = NDArray.load("hdfs://path/to/hdfs/array")
```

An advantage of the `save` and `load` interface is that the file format is compatible across all `mxnet` language bindings, and it already supports Amazon S3 and HDFS.
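
In the Scala binding, `load` returns the stored names together with the arrays as a pair. A minimal sketch of unpacking the result (the `load2Map` convenience method, which rebuilds the dictionary directly, is assumed to be available in your version):

```scala
import org.apache.mxnet._
val a = NDArray.zeros(100, 200)
NDArray.save("/tmp/arrays", Map("A" -> a))
// load returns (names, arrays)
val (names, arrays) = NDArray.load("/tmp/arrays")
// load2Map rebuilds the name -> NDArray dictionary (assumed available)
val dict = NDArray.load2Map("/tmp/arrays")
dict("A").shape
// org.apache.mxnet.Shape = (100,200)
```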

### Multi-Device Support

Device information is stored in the `Context` structure (`org.apache.mxnet.Context`). When creating an NDArray, you can use the context argument (the default is the CPU context) to create arrays on specific devices as follows:

```scala
import org.apache.mxnet._
val cpu_a = NDArray.zeros(100, 200)
cpu_a.context
// org.apache.mxnet.Context = cpu(0)
val ctx = Context.gpu(0)
val gpu_b = NDArray.zeros(Shape(100, 200), ctx)
gpu_b.context
// org.apache.mxnet.Context = gpu(0)
```

Currently, we *do not* allow operations among arrays from different contexts. To combine such arrays, first use the `copyTo` member function to copy the content to a common device, and then continue the computation:

```scala
import org.apache.mxnet._
val x = NDArray.zeros(100, 200)
val ctx = Context.gpu(0)
val y = NDArray.zeros(Shape(100, 200), ctx)
val z = x + y
// mxnet.base.MXNetError: [13:29:12] src/ndarray/ndarray.cc:33:
// Check failed: lhs.ctx() == rhs.ctx() operands context mismatch
val cpu_y = NDArray.zeros(100, 200)
y.copyTo(cpu_y)
val z = x + cpu_y
```
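
`copyTo` also accepts a `Context` directly, which allocates the destination array for you; this is a convenient way to move everything onto the GPU and compute there instead. A sketch, assuming a GPU build of MXNet:

```scala
import org.apache.mxnet._
val x = NDArray.zeros(100, 200)
val ctx = Context.gpu(0)
// allocate a copy of x on the GPU, so both operands share a context
val gpu_x = x.copyTo(ctx)
val gpu_y = NDArray.zeros(Shape(100, 200), ctx)
val gpu_z = gpu_x + gpu_y
gpu_z.context
// org.apache.mxnet.Context = gpu(0)
```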

## Next Steps
* See [KVStore API](kvstore) for multi-GPU and multi-host distributed training.