---
layout: global
title: Monitoring and Instrumentation
description: Monitoring, metrics, and instrumentation guide for Spark SPARK_VERSION_SHORT
---

There are several ways to monitor Spark applications: web UIs, metrics, and external instrumentation.

# Web Interfaces

Every SparkContext launches a web UI, by default on port 4040, that
displays useful information about the application. This includes:

* A list of scheduler stages and tasks
* A summary of RDD sizes and memory usage
* Environmental information
* Information about the running executors

You can access this interface by simply opening `http://<driver-node>:4040` in a web browser.
If multiple SparkContexts are running on the same host, they will bind to successive ports
beginning with 4040 (4041, 4042, etc).

Note that this information is only available for the duration of the application by default.
To view the web UI after the fact, set `spark.eventLog.enabled` to true before starting the
application. This configures Spark to log Spark events that encode the information displayed
in the UI to persisted storage.

## Viewing After the Fact

It is still possible to construct the UI of an application through Spark's history server,
provided that the application's event logs exist.
You can start the history server by executing:

    ./sbin/start-history-server.sh

This creates a web interface at `http://<server-url>:18080` by default, listing incomplete
and completed applications and attempts.

When using the file-system provider class (see `spark.history.provider` below), the base logging
directory must be supplied in the `spark.history.fs.logDirectory` configuration option,
and should contain sub-directories, each of which represents an application's event logs.

The Spark jobs themselves must be configured to log events, and to log them to the same shared,
writable directory. For example, if the server was configured with a log directory of
`hdfs://namenode/shared/spark-logs`, then the client-side options would be:

    spark.eventLog.enabled true
    spark.eventLog.dir hdfs://namenode/shared/spark-logs
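
These properties can live in `conf/spark-defaults.conf`, or be passed per job; as a sketch
(the application placeholder is illustrative), the same settings on the `spark-submit` command
line would look like:

    ./bin/spark-submit \
      --conf spark.eventLog.enabled=true \
      --conf spark.eventLog.dir=hdfs://namenode/shared/spark-logs \
      <other options> <application jar or script>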

The history server can be configured as follows:

### Environment Variables

54<table class="table">
55  <tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr>
56  <tr>
57    <td><code>SPARK_DAEMON_MEMORY</code></td>
58    <td>Memory to allocate to the history server (default: 1g).</td>
59  </tr>
60  <tr>
61    <td><code>SPARK_DAEMON_JAVA_OPTS</code></td>
62    <td>JVM options for the history server (default: none).</td>
63  </tr>
64  <tr>
65    <td><code>SPARK_PUBLIC_DNS</code></td>
66    <td>
67      The public address for the history server. If this is not set, links to application history
68      may use the internal address of the server, resulting in broken links (default: none).
69    </td>
70  </tr>
71  <tr>
72    <td><code>SPARK_HISTORY_OPTS</code></td>
73    <td>
74      <code>spark.history.*</code> configuration options for the history server (default: none).
75    </td>
76  </tr>
77</table>

### Spark configuration options

<table class="table">
  <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
  <tr>
    <td>spark.history.provider</td>
    <td><code>org.apache.spark.deploy.history.FsHistoryProvider</code></td>
    <td>Name of the class implementing the application history backend. Currently there is only
    one implementation, provided by Spark, which looks for application logs stored in the
    file system.</td>
  </tr>
  <tr>
    <td>spark.history.fs.logDirectory</td>
    <td>file:/tmp/spark-events</td>
    <td>
    For the filesystem history provider, the URL to the directory containing application event
    logs to load. This can be a local <code>file://</code> path,
    an HDFS path <code>hdfs://namenode/shared/spark-logs</code>
    or that of an alternative filesystem supported by the Hadoop APIs.
    </td>
  </tr>
  <tr>
    <td>spark.history.fs.update.interval</td>
    <td>10s</td>
    <td>
      The period at which the filesystem history provider checks for new or
      updated logs in the log directory. A shorter interval detects new applications faster,
      at the expense of more server load re-reading updated applications.
      As soon as an update has completed, listings of the completed and incomplete applications
      will reflect the changes.
    </td>
  </tr>
111  <tr>
112    <td>spark.history.retainedApplications</td>
113    <td>50</td>
114    <td>
115      The number of applications to retain UI data for in the cache. If this cap is exceeded, then
116      the oldest applications will be removed from the cache. If an application is not in the cache,
117      it will have to be loaded from disk if its accessed from the UI.
118    </td>
119  </tr>
120  <tr>
121    <td>spark.history.ui.maxApplications</td>
122    <td>Int.MaxValue</td>
123    <td>
124      The number of applications to display on the history summary page. Application UIs are still
125      available by accessing their URLs directly even if they are not displayed on the history summary page.
126    </td>
127  </tr>
128  <tr>
129    <td>spark.history.ui.port</td>
130    <td>18080</td>
131    <td>
132      The port to which the web interface of the history server binds.
133    </td>
134  </tr>
135  <tr>
136    <td>spark.history.kerberos.enabled</td>
137    <td>false</td>
138    <td>
139      Indicates whether the history server should use kerberos to login. This is required
140      if the history server is accessing HDFS files on a secure Hadoop cluster. If this is
141      true, it uses the configs <code>spark.history.kerberos.principal</code> and
142      <code>spark.history.kerberos.keytab</code>.
143    </td>
144  </tr>
145  <tr>
146    <td>spark.history.kerberos.principal</td>
147    <td>(none)</td>
148    <td>
149      Kerberos principal name for the History Server.
150    </td>
151  </tr>
152  <tr>
153    <td>spark.history.kerberos.keytab</td>
154    <td>(none)</td>
155    <td>
156      Location of the kerberos keytab file for the History Server.
157    </td>
158  </tr>
159  <tr>
160    <td>spark.history.ui.acls.enable</td>
161    <td>false</td>
162    <td>
163      Specifies whether acls should be checked to authorize users viewing the applications.
164      If enabled, access control checks are made regardless of what the individual application had
165      set for <code>spark.ui.acls.enable</code> when the application was run. The application owner
166      will always have authorization to view their own application and any users specified via
167      <code>spark.ui.view.acls</code> and groups specified via <code>spark.ui.view.acls.groups</code>
168      when the application was run will also have authorization to view that application.
169      If disabled, no access control checks are made.
170    </td>
171  </tr>
172  <tr>
173    <td>spark.history.ui.admin.acls</td>
174    <td>empty</td>
175    <td>
176      Comma separated list of users/administrators that have view access to all the Spark applications in
177      history server. By default only the users permitted to view the application at run-time could
178      access the related application history, with this, configured users/administrators could also
179      have the permission to access it.
180      Putting a "*" in the list means any user can have the privilege of admin.
181    </td>
182  </tr>
183  <tr>
184    <td>spark.history.ui.admin.acls.groups</td>
185    <td>empty</td>
186    <td>
187      Comma separated list of groups that have view access to all the Spark applications in
188      history server. By default only the groups permitted to view the application at run-time could
189      access the related application history, with this, configured groups could also
190      have the permission to access it.
191      Putting a "*" in the list means any group can have the privilege of admin.
192    </td>
193  </tr>
194  <tr>
195    <td>spark.history.fs.cleaner.enabled</td>
196    <td>false</td>
197    <td>
198      Specifies whether the History Server should periodically clean up event logs from storage.
199    </td>
200  </tr>
201  <tr>
202    <td>spark.history.fs.cleaner.interval</td>
203    <td>1d</td>
204    <td>
205      How often the filesystem job history cleaner checks for files to delete.
206      Files are only deleted if they are older than <code>spark.history.fs.cleaner.maxAge</code>
207    </td>
208  </tr>
209  <tr>
210    <td>spark.history.fs.cleaner.maxAge</td>
211    <td>7d</td>
212    <td>
213      Job history files older than this will be deleted when the filesystem history cleaner runs.
214    </td>
215  </tr>
216  <tr>
217    <td>spark.history.fs.numReplayThreads</td>
218    <td>25% of available cores</td>
219    <td>
220      Number of threads that will be used by history server to process event logs.
221    </td>
222  </tr>
223</table>

Note that in all of these UIs, the tables are sortable by clicking their headers,
making it easy to identify slow tasks, data skew, etc.

Note

1. The history server displays both completed and incomplete Spark jobs. If an application makes
multiple attempts after failures, the failed attempts will be displayed, as well as any ongoing
incomplete attempt or the final successful attempt.

2. Incomplete applications are only updated intermittently. The time between updates is defined
by the interval between checks for changed files (`spark.history.fs.update.interval`).
On larger clusters the update interval may be set to large values.
The way to view a running application is actually to view its own web UI.

3. Applications which exited without registering themselves as completed will be listed
as incomplete, even though they are no longer running. This can happen if an application
crashes.

4. One way to signal the completion of a Spark job is to stop the Spark Context
explicitly (`sc.stop()`), or in Python using the `with SparkContext() as sc:` construct
to handle the Spark Context setup and tear down, as shown in the short sketch below.
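
A minimal PySpark sketch of the second approach (the application name and workload are
illustrative):

    from pyspark import SparkContext

    # The context manager stops the SparkContext on exit, so the application
    # is recorded as completed in the event logs.
    with SparkContext(appName="event-log-example") as sc:
        sc.parallelize(range(1000)).count()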


## REST API

In addition to being viewable in the UI, the metrics are also available as JSON. This gives developers
an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for
both running applications and the history server. The endpoints are mounted at `/api/v1`. For example,
for the history server they would typically be accessible at `http://<server-url>:18080/api/v1`, and
for a running application at `http://localhost:4040/api/v1`.
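
For instance, the list of completed applications known to a history server can be fetched with a
tool such as `curl` (the query parameters are optional and illustrative):

    curl "http://<server-url>:18080/api/v1/applications?status=completed&limit=10"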

In the API, an application is referenced by its application ID, `[app-id]`.
When running on YARN, each application may have multiple attempts, but there are attempt IDs
only for applications in cluster mode, not applications in client mode. Applications in YARN cluster mode
can be identified by their `[attempt-id]`. In the API listed below, when running in YARN cluster mode,
`[app-id]` will actually be `[base-app-id]/[attempt-id]`, where `[base-app-id]` is the YARN application ID.

262<table class="table">
263  <tr><th>Endpoint</th><th>Meaning</th></tr>
264  <tr>
265    <td><code>/applications</code></td>
266    <td>A list of all applications.
267    <br>
268    <code>?status=[completed|running]</code> list only applications in the chosen state.
269    <br>
270    <code>?minDate=[date]</code> earliest date/time to list.
271    <br>Examples:
272    <br><code>?minDate=2015-02-10</code>
273    <br><code>?minDate=2015-02-03T16:42:40.000GMT</code>
274    <br><code>?maxDate=[date]</code> latest date/time to list; uses same format as <code>minDate</code>.
275    <br><code>?limit=[limit]</code> limits the number of applications listed.</td>
276  </tr>
277  <tr>
278    <td><code>/applications/[app-id]/jobs</code></td>
279    <td>
280      A list of all jobs for a given application.
281      <br><code>?status=[running|succeeded|failed|unknown]</code> list only jobs in the specific state.
282    </td>
283  </tr>
284  <tr>
285    <td><code>/applications/[app-id]/jobs/[job-id]</code></td>
286    <td>Details for the given job.</td>
287  </tr>
288  <tr>
289    <td><code>/applications/[app-id]/stages</code></td>
290    <td>A list of all stages for a given application.</td>
291    <br><code>?status=[active|complete|pending|failed]</code> list only stages in the state.
292  </tr>
293  <tr>
294    <td><code>/applications/[app-id]/stages/[stage-id]</code></td>
295    <td>
296      A list of all attempts for the given stage.
297    </td>
298  </tr>
299  <tr>
300    <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]</code></td>
301    <td>Details for the given stage attempt</td>
302  </tr>
303  <tr>
304    <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskSummary</code></td>
305    <td>
306      Summary metrics of all tasks in the given stage attempt.
307      <br><code>?quantiles</code> summarize the metrics with the given quantiles.
308      <br>Example: <code>?quantiles=0.01,0.5,0.99</code>
309    </td>
310  </tr>
311  <tr>
312    <td><code>/applications/[app-id]/stages/[stage-id]/[stage-attempt-id]/taskList</code></td>
313    <td>
314       A list of all tasks for the given stage attempt.
315      <br><code>?offset=[offset]&amp;length=[len]</code> list tasks in the given range.
316      <br><code>?sortBy=[runtime|-runtime]</code> sort the tasks.
317      <br>Example: <code>?offset=10&amp;length=50&amp;sortBy=runtime</code>
318    </td>
319  </tr>
320  <tr>
321    <td><code>/applications/[app-id]/executors</code></td>
322    <td>A list of all active executors for the given application.</td>
323  </tr>
324  <tr>
325    <td><code>/applications/[app-id]/allexecutors</code></td>
326    <td>A list of all(active and dead) executors for the given application.</td>
327  </tr>
328  <tr>
329    <td><code>/applications/[app-id]/storage/rdd</code></td>
330    <td>A list of stored RDDs for the given application.</td>
331  </tr>
332  <tr>
333    <td><code>/applications/[app-id]/storage/rdd/[rdd-id]</code></td>
334    <td>Details for the storage status of a given RDD.</td>
335  </tr>
336  <tr>
337    <td><code>/applications/[base-app-id]/logs</code></td>
338    <td>Download the event logs for all attempts of the given application as files within
339    a zip file.
340    </td>
341  </tr>
342  <tr>
343    <td><code>/applications/[base-app-id]/[attempt-id]/logs</code></td>
344    <td>Download the event logs for a specific application attempt as a zip file.</td>
345  </tr>
346</table>

The number of jobs and stages which can be retrieved is constrained by the same retention
mechanism as the standalone Spark UI; `spark.ui.retainedJobs` defines the threshold
value triggering garbage collection on jobs, and `spark.ui.retainedStages` that for stages.
Note that the garbage collection takes place on playback: it is possible to retrieve
more entries by increasing these values and restarting the history server.
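
As a sketch (the values are illustrative; check the configuration page for the current defaults),
these limits could be raised in the history server's `conf/spark-defaults.conf` before restarting it:

    spark.ui.retainedJobs    5000
    spark.ui.retainedStages  5000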

### API Versioning Policy

These endpoints have been strongly versioned to make it easier to develop applications on top.
In particular, Spark guarantees:

* Endpoints will never be removed from one version
* Individual fields will never be removed for any given endpoint
* New endpoints may be added
* New fields may be added to existing endpoints
* New versions of the API may be added in the future at a separate endpoint (e.g., `api/v2`). New versions are *not* required to be backwards compatible.
* API versions may be dropped, but only after co-existing with a new API version for at least one minor release.

Note that even when examining the UI of a running application, the `applications/[app-id]` portion is
still required, though there is only one application available. For example, to see the list of jobs for the
running app, you would go to `http://localhost:4040/api/v1/applications/[app-id]/jobs`. This is to
keep the paths consistent in both modes.
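
For example, against a locally running application (a sketch; `[app-id]` is whatever ID the first
call returns):

    curl http://localhost:4040/api/v1/applications
    curl http://localhost:4040/api/v1/applications/[app-id]/jobs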

# Metrics

Spark has a configurable metrics system based on the
[Dropwizard Metrics Library](http://metrics.dropwizard.io/).
This allows users to report Spark metrics to a variety of sinks including HTTP, JMX, and CSV
files. The metrics system is configured via a configuration file that Spark expects to be present
at `$SPARK_HOME/conf/metrics.properties`. A custom file location can be specified via the
`spark.metrics.conf` [configuration property](configuration.html#spark-properties).
By default, the root namespace used for driver or executor metrics is
the value of `spark.app.id`. However, users often want to be able to track the metrics
across apps for the driver and executors, which is hard to do with the application ID
(i.e. `spark.app.id`) since it changes with every invocation of the app. For such use cases,
a custom namespace can be specified for metrics reporting using the `spark.metrics.namespace`
configuration property.
If, say, users wanted to set the metrics namespace to the name of the application, they
can set the `spark.metrics.namespace` property to a value like `${spark.app.name}`. This value is
then expanded appropriately by Spark and is used as the root namespace of the metrics system.
Metrics that do not belong to the driver or executor instances are never prefixed with `spark.app.id`,
nor does the `spark.metrics.namespace` property have any effect on them.
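
For instance, to group metrics by application name as described above, the following could be set
in `conf/spark-defaults.conf` or passed with `--conf` on submission:

    spark.metrics.namespace  ${spark.app.name}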

Spark's metrics are decoupled into different
_instances_ corresponding to Spark components. Within each instance, you can configure a
set of sinks to which metrics are reported. The following instances are currently supported:

* `master`: The Spark standalone master process.
* `applications`: A component within the master which reports on various applications.
* `worker`: A Spark standalone worker process.
* `executor`: A Spark executor.
* `driver`: The Spark driver process (the process in which your SparkContext is created).
* `shuffleService`: The Spark shuffle service.

Each instance can report to zero or more _sinks_. Sinks are contained in the
`org.apache.spark.metrics.sink` package:

* `ConsoleSink`: Logs metrics information to the console.
* `CSVSink`: Exports metrics data to CSV files at regular intervals.
* `JmxSink`: Registers metrics for viewing in a JMX console.
* `MetricsServlet`: Adds a servlet within the existing Spark UI to serve metrics data as JSON data.
* `GraphiteSink`: Sends metrics to a Graphite node.
* `Slf4jSink`: Sends metrics to slf4j as log entries.

Spark also supports a Ganglia sink which is not included in the default build due to
licensing restrictions:

* `GangliaSink`: Sends metrics to a Ganglia node or multicast group.

To install the `GangliaSink` you'll need to perform a custom build of Spark. _**Note that
by embedding this library you will include [LGPL](http://www.gnu.org/copyleft/lesser.html)-licensed
code in your Spark package**_. For sbt users, set the
`SPARK_GANGLIA_LGPL` environment variable before building. For Maven users, enable
the `-Pspark-ganglia-lgpl` profile. In addition to modifying the cluster's Spark build,
user applications will need to link to the `spark-ganglia-lgpl` artifact.
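
For example, a Maven build that bundles the sink might look like the following (a sketch; combine
the profile with whatever other flags and profiles your deployment already uses):

    ./build/mvn -Pspark-ganglia-lgpl -DskipTests clean package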

The syntax of the metrics configuration file is defined in an example configuration file,
`$SPARK_HOME/conf/metrics.properties.template`.
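
As a minimal sketch (the sink options shown are illustrative; consult the template for the full
syntax), entries follow the pattern `[instance].sink.[sink-name].[option]`:

    # conf/metrics.properties
    # Report metrics from all instances to the console every 10 seconds
    *.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
    *.sink.console.period=10
    *.sink.console.unit=seconds
    # Additionally write the driver's metrics to CSV files
    driver.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
    driver.sink.csv.directory=/tmp/spark-metrics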

# Advanced Instrumentation

Several external tools can be used to help profile the performance of Spark jobs:

* Cluster-wide monitoring tools, such as [Ganglia](http://ganglia.sourceforge.net/), can provide
insight into overall cluster utilization and resource bottlenecks. For instance, a Ganglia
dashboard can quickly reveal whether a particular workload is disk bound, network bound, or
CPU bound.
* OS profiling tools such as [dstat](http://dag.wieers.com/home-made/dstat/),
[iostat](http://linux.die.net/man/1/iostat), and [iotop](http://linux.die.net/man/1/iotop)
can provide fine-grained profiling on individual nodes.
* JVM utilities such as `jstack` for providing stack traces, `jmap` for creating heap dumps,
`jstat` for reporting time-series statistics and `jconsole` for visually exploring various JVM
properties are useful for those comfortable with JVM internals; a few illustrative invocations
appear below.

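For example (illustrative invocations; `<pid>` is the process ID of a driver or executor JVM on
the node in question):

    jstack <pid>                                        # thread dump
    jmap -dump:live,format=b,file=/tmp/heap.hprof <pid> # heap dump
    jstat -gcutil <pid> 5000                            # GC utilization every 5 seconds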