.. _volume_rendering:

3D Visualization and Volume Rendering
=====================================

yt has the ability to create 3D visualizations using a process known as *volume
rendering* (oftentimes abbreviated VR).  This volume rendering code differs
from the standard yt infrastructure for generating :ref:`simple-inspection`
in that it evaluates the radiative transfer equations through the volume with
user-defined transfer functions for each ray.  Thus it can accommodate both
opaque and transparent structures appropriately.  Currently all of the
rendering capabilities are implemented in software, requiring no specialized
hardware. Optimized versions implemented with OpenGL and utilizing graphics
processors are being actively developed.

.. note::

   There is a Jupyter notebook containing a volume rendering tutorial available
   at :ref:`volume-rendering-tutorial`.

Volume Rendering Introduction
-----------------------------

Constructing a 3D visualization is a process of describing the "scene" that
will be rendered.  This includes the location of the viewing point (i.e., where
the "camera" is placed), the method by which a system would be viewed (i.e.,
the "lens," which may be orthographic, perspective, fisheye, spherical, and so
on), and the components that will be rendered (render "sources," such as volume
elements, lines, annotations, and opaque surfaces).  The 3D plotting
infrastructure then develops a resultant image from this scene, which can be
saved to a file or viewed inline.

By constructing the scene in this programmatic way, you retain full control
over each component in the scene as well as the method by which the scene is
rendered; this can be used to prototype visualizations, inject annotations such
as grid or continent lines, and then to render a production-quality
visualization.  By changing the "lens" used, a single camera path can output
images suitable for planetarium domes, immersive and head-tracking systems
(such as the Oculus Rift or recent 360-degree/virtual reality movie viewers
such as the mobile YouTube app), as well as standard screens.

.. image:: _images/scene_diagram.svg
   :width: 50%
   :align: center
   :alt: Diagram of a 3D Scene

.. _scene-description:

Volume Rendering Components
---------------------------

The Scene class and its subcomponents are organized as follows.  Indented
objects *hang* off of their parent object.

* :ref:`Scene <scene>` - container object describing a volume and its contents
    * :ref:`Sources <render-sources>` - objects to be rendered
        * :ref:`VolumeSource <volume-sources>` - simulation volume tied to a dataset
            * :ref:`TransferFunction <transfer_functions>` - mapping of simulation field values to color, brightness, and transparency
        * :ref:`OpaqueSource <opaque-sources>` - opaque structures like lines, dots, etc.
        * :ref:`Annotations <volume_rendering_annotations>` - annotated structures like grid cells, simulation boundaries, etc.
    * :ref:`Camera <camera>` - object for rendering; consists of a location, focus, orientation, and resolution
        * :ref:`Lens <lenses>` - object describing method for distributing rays through Sources

.. _scene:

Scene
^^^^^

The :class:`~yt.visualization.volume_rendering.scene.Scene`
is the container class which encompasses the whole of the volume
rendering interface.  At its base level, it describes an infinite volume,
with a series of
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
hanging off of it that describe the contents
of that volume.  It also contains a
:class:`~yt.visualization.volume_rendering.camera.Camera` for rendering that
volume.  All of these objects can be
accessed and modified as properties hanging off of the scene.
The scene's most important functions are
:meth:`~yt.visualization.volume_rendering.scene.Scene.render` for
casting rays through the scene and
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` for saving the
resulting rendered image to disk (see the note on :ref:`when_to_render`).

The easiest way to create a scene with sensible defaults is to use the
functions
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene`
(creates the scene) or
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
(creates the scene and then triggers ray tracing to produce an image).
See the :ref:`annotated-vr-example` for details.
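
At its simplest, a default render is a three-line affair.  The following
minimal sketch (using one of the sample datasets featured throughout this
document) creates a scene and renders it with all defaults:

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   # create_scene builds a Scene containing a VolumeSource and a Camera
   sc = yt.create_scene(ds, ("gas", "density"))

   # save() triggers the (expensive) ray-casting step before writing the image
   sc.save("rendering.png")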

.. _render-sources:

RenderSources
^^^^^^^^^^^^^

:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
comprise the contents of what is actually *rendered*.  One can add several
different RenderSources to a Scene and the ray-tracing step will pass rays
through all of them to produce the final rendered image.

.. _volume-sources:

VolumeSources
+++++++++++++

:class:`~yt.visualization.volume_rendering.render_source.VolumeSource` objects
are 3D :ref:`geometric-objects` of individual datasets placed into the scene
for rendering.  Each VolumeSource requires a
:ref:`TransferFunction <transfer_functions>` to describe how the fields in
the VolumeSource dataset produce different colors and brightnesses in the
resulting image.

.. _opaque-sources:

OpaqueSources
+++++++++++++

In addition to semi-transparent objects, fully opaque structures can be added
to a scene as
:class:`~yt.visualization.volume_rendering.render_source.OpaqueSource` objects,
including
:class:`~yt.visualization.volume_rendering.render_source.LineSource` objects
and
:class:`~yt.visualization.volume_rendering.render_source.PointSource` objects.
These are useful if you want to annotate locations or particles in an image,
or if you want to draw lines connecting different regions or
vertices.  For instance, lines can be used to draw outlines of regions or
continents.

Worked examples of using the ``LineSource`` and ``PointSource`` are available at
:ref:`cookbook-vol-points` and :ref:`cookbook-vol-lines`.
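
As a minimal sketch of the pattern those recipes follow (the vertex layout is
an ``(N, 2, 3)`` array of line endpoints plus an ``(N, 4)`` array of RGBA
colors; the particular values here are illustrative):

.. code-block:: python

   import numpy as np

   import yt
   from yt.visualization.volume_rendering.api import LineSource

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   # Ten random line segments, specified by their endpoints in data space
   vertices = ds.arr(np.random.random([10, 2, 3]), "unitary")

   # One RGBA color per line; the alpha channel sets how bright it appears
   colors = np.random.random([10, 4])
   colors[:, 3] = 0.1

   sc.add_source(LineSource(vertices, colors))
   sc.save("rendering_with_lines.png", sigma_clip=4.0)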

.. _volume_rendering_annotations:

Annotations
+++++++++++

Similar to OpaqueSources, annotations enable the user to highlight
certain information with opaque structures.  Examples include
:class:`~yt.visualization.volume_rendering.api.BoxSource`,
:class:`~yt.visualization.volume_rendering.api.GridSource`, and
:class:`~yt.visualization.volume_rendering.api.CoordinateVectorSource`.  These
annotations operate in data space and can draw boxes, grid information,
and also provide a vector orientation within the image.

For example scripts using these features,
see :ref:`cookbook-volume_rendering_annotations`.
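
The :class:`~yt.visualization.volume_rendering.scene.Scene` class also exposes
convenience methods that construct and add these sources for you.  A minimal
sketch (the color and alpha values are illustrative):

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   # Draw the domain boundary as a faint box (a BoxSource under the hood)
   sc.annotate_domain(ds, color=[1.0, 1.0, 1.0, 0.01])

   # Draw coordinate axes (a CoordinateVectorSource under the hood)
   sc.annotate_axes(alpha=0.02)

   sc.save("annotated_rendering.png", sigma_clip=4.0)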

.. _transfer_functions:

Transfer Functions
^^^^^^^^^^^^^^^^^^

A transfer function describes how rays that pass through the domain of a
:class:`~yt.visualization.volume_rendering.render_source.VolumeSource` are
mapped from simulation field values to color, brightness, and opacity in the
resulting rendered image.  A transfer function consists of an array over
the x and y dimensions.  The x dimension typically represents field values in
your underlying dataset to which you want your rendering to be sensitive (e.g.
density from 1e20 to 1e23).  The y dimension consists of 4 channels for red,
green, blue, and alpha (opacity).  A transfer function starts with all zeros
for its y dimension values, implying that rays traversing the VolumeSource
will not show up at all in the final image.  However, you can add features to
the transfer function that will highlight certain field values in your
rendering.

.. _transfer-function-helper:

TransferFunctionHelper
++++++++++++++++++++++

Because good transfer functions can be difficult to generate, the
:class:`~yt.visualization.volume_rendering.transfer_function_helper.TransferFunctionHelper`
exists in order to help create and modify transfer functions with smart
defaults for your datasets.

To ease constructing transfer functions, each ``VolumeSource`` instance has a
``TransferFunctionHelper`` instance associated with it. This is the easiest way
to construct and customize a ``ColorTransferFunction`` for a volume rendering.

In the following example, we make use of the ``TransferFunctionHelper``
associated with a scene's ``VolumeSource`` to create an appealing transfer
function between a physically motivated range of densities in a cosmological
simulation:

.. python-script::

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   # Get a reference to the VolumeSource associated with this scene
   # It is the first source associated with the scene, so we can refer to it
   # using index 0.
   source = sc[0]

   # Set the bounds of the transfer function
   source.tfh.set_bounds((3e-31, 5e-27))

   # Evaluate the transfer function in log space
   source.tfh.set_log(True)

   # Make underdense regions appear opaque
   source.tfh.grey_opacity = True

   # Plot the transfer function, along with the CDF of the density field to
   # see how the transfer function corresponds to structure in the CDF
   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   # Save the image, flooring especially bright pixels for better contrast
   sc.save("rendering.png", sigma_clip=6.0)

For fun, let's make the same volume rendering, but this time setting
``grey_opacity=False``, which will make overdense regions stand out more:

.. python-script::

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   # Set transfer function properties
   source.tfh.set_bounds((3e-31, 5e-27))
   source.tfh.set_log(True)
   source.tfh.grey_opacity = False

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=4.0)

To see a full example of how to use the ``TransferFunctionHelper`` interface,
follow the annotated :ref:`transfer-function-helper-tutorial`.

Color Transfer Functions
++++++++++++++++++++++++

A :class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
is the standard way to map dataset field values to colors, brightnesses,
and opacities in the rendered rays.  One can add discrete features to the
transfer function, which will render isocontours in the field data; this
works well for visualizing nested structures in a simulation.  Alternatively,
one can add continuous features to the transfer function.

See :ref:`cookbook-custom-transfer-function` for an annotated, runnable tutorial
explaining usage of the ColorTransferFunction.

There are several methods to create a
:class:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction`
for a volume rendering. We will describe the low-level interface for
constructing color transfer functions here, and provide examples for each
option.

add_layers
""""""""""

The easiest way to create a ColorTransferFunction is to use the
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_layers` function,
which will add evenly spaced isocontours along the transfer function, sampling a
colormap to determine the colors of the layers.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.add_layers(5, colormap="arbre")

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

sample_colormap
"""""""""""""""

To add a single Gaussian layer with a color determined by a colormap value, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.sample_colormap`.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.sample_colormap(np.log10(1e-30), w=0.01, colormap="arbre")

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)


add_gaussian
""""""""""""

If you would like to add a Gaussian with a customized color or no color, use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.add_gaussian`.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))

   tf.add_gaussian(np.log10(1e-29), width=0.005, height=[0.753, 1.0, 0.933, 1.0])

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)


map_to_colormap
"""""""""""""""

Finally, to map a colormap directly to a range in densities use
:meth:`~yt.visualization.volume_rendering.transfer_functions.ColorTransferFunction.map_to_colormap`. This
makes it possible to map a segment of the transfer function space to a colormap
at a single alpha value. Where the above options produced layered volume
renderings, this allows all of the density values in a dataset to contribute to
the volume rendering.

.. python-script::

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   sc = yt.create_scene(ds, lens_type="perspective")

   source = sc[0]

   source.set_field(("gas", "density"))
   source.set_log(True)

   bounds = (3e-31, 5e-27)

   # Since this rendering is done in log space, the transfer function needs
   # to be specified in log space.
   tf = yt.ColorTransferFunction(np.log10(bounds))


   def linramp(vals, minval, maxval):
       return (vals - vals.min()) / (vals.max() - vals.min())


   tf.map_to_colormap(
       np.log10(3e-31), np.log10(5e-27), colormap="arbre", scale_func=linramp
   )

   source.tfh.tf = tf
   source.tfh.bounds = bounds

   source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))

   sc.save("rendering.png", sigma_clip=6)

Projection Transfer Function
++++++++++++++++++++++++++++

This is designed to allow you to generate projections like those you obtain
from the standard :ref:`projection-plots`, and it forms the basis of
:ref:`off-axis-projections`.  See :ref:`cookbook-offaxis_projection` for a
simple example.  Note that the integration here is scaled to a width of 1.0;
this means that if you want to apply a colorbar, you will have to multiply by
the integration width (specified when you initialize the volume renderer) in
whatever units are appropriate.
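
A short sketch of the related high-level ``yt.off_axis_projection`` helper
(the normal vector, width, and resolution below are illustrative choices):

.. code-block:: python

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   # Integrate ("gas", "density") along a tilted line of sight
   image = yt.off_axis_projection(
       ds, ds.domain_center, [0.3, 0.5, 0.8], 1.0, 512, ("gas", "density")
   )

   yt.write_image(np.log10(image), "offaxis_projection.png")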

Planck Transfer Function
++++++++++++++++++++++++

This transfer function is designed to apply a semi-realistic color field based
on temperature, emission weighted by density, and approximate scattering based
on the density.  This class is currently under-documented, and it may be best
to examine the source code to use it.

More Complicated Transfer Functions
+++++++++++++++++++++++++++++++++++

For more complicated transfer functions, you can use the
:class:`~yt.visualization.volume_rendering.transfer_functions.MultiVariateTransferFunction`
object.  This allows for a set of weightings, linkages, and so on.
All of the information about how transfer functions are used and values are
extracted is contained in the source file ``utilities/lib/grid_traversal.pyx``.
For more information on how the transfer function is actually applied, look
over the source code there.

.. _camera:

Camera
^^^^^^

The :class:`~yt.visualization.volume_rendering.camera.Camera` object
is what it sounds like: a camera within the Scene.  It possesses the
quantities:

* :meth:`~yt.visualization.volume_rendering.camera.Camera.position` - the position of the camera in scene-space
* :meth:`~yt.visualization.volume_rendering.camera.Camera.width` - the width of the plane the camera can see
* :meth:`~yt.visualization.volume_rendering.camera.Camera.focus` - the point in space the camera is looking at
* :meth:`~yt.visualization.volume_rendering.camera.Camera.resolution` - the image resolution
* ``north_vector`` - a vector defining the "up" direction in an image
* :ref:`lens <lenses>` - an object controlling how rays traverse the Scene
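
These can be read and assigned directly through ``sc.camera``.  A minimal
sketch (the particular values are illustrative):

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   cam = sc.camera

   cam.width = (0.3, "unitary")  # field of view width
   cam.focus = ds.domain_center  # point the camera at the domain center
   cam.resolution = (1024, 1024)  # output image size in pixels

   sc.save("rendering.png")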

.. _camera_movement:

Moving and Orienting the Camera
+++++++++++++++++++++++++++++++

There are multiple ways to manipulate the camera viewpoint and orientation.
One can set the properties listed above explicitly, or one can use the
:class:`~yt.visualization.volume_rendering.camera.Camera` helper methods.
In either case, any change triggers an update of all of the other properties.
Note that the camera exists in a right-handed coordinate system centered on
the camera.

Rotation-related methods
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.pitch` - rotate about the lateral axis
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.yaw` - rotate about the vertical axis (i.e. ``north_vector``)
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.roll` - rotate about the longitudinal axis (i.e. ``normal_vector``)
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.rotate` - rotate about an arbitrary axis
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_rotate` - iteratively rotate about an arbitrary axis

For the rotation methods, the camera pivots around the ``rot_center`` rotation
center.  By default, this is the camera position, which means that the
camera doesn't change its position at all; it just changes its orientation.

Zoom-related methods
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.set_width` - change the width of the FOV
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.zoom` - change the width of the FOV
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_zoom` - iteratively change the width of the FOV

Perhaps counterintuitively, the camera does not get closer to the focus
during a zoom; it simply reduces the width of the field of view.

Translation-related methods
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.set_position` - change the location of the camera, keeping the focus fixed
 * :meth:`~yt.visualization.volume_rendering.camera.Camera.iter_move` - iteratively change the location of the camera, keeping the focus fixed

The iterative methods provide iteration over a series of changes in the
position or orientation of the camera.  These can be used within a loop,
as in the sketch below.
For an example on how to use all of these camera movement functions, see
:ref:`cookbook-camera_movement`.
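
A minimal sketch of the looping pattern that recipe uses (the frame counts and
angles here are illustrative):

.. code-block:: python

   import numpy as np

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))
   cam = sc.camera

   frame = 0

   # The iter_* methods are generators; each iteration nudges the camera
   for _ in cam.iter_zoom(2.0, 5):
       sc.save("camera_movement_%04i.png" % frame, sigma_clip=4.0)
       frame += 1

   # Rotate by pi radians over 5 frames about the default rotation axis
   for _ in cam.iter_rotate(np.pi, 5):
       sc.save("camera_movement_%04i.png" % frame, sigma_clip=4.0)
       frame += 1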

.. _lenses:

Camera Lenses
^^^^^^^^^^^^^

Cameras possess :class:`~yt.visualization.volume_rendering.lens.Lens` objects,
which control the geometric path in which rays travel to the camera.  These
lenses can be swapped in and out of an existing camera to produce different
views of the same Scene.  For a full demonstration of a Scene object
rendered with different lenses, see :ref:`cookbook-various_lens`.
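
A lens can be chosen up front via the ``lens_type`` keyword to
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene` (as in
the examples above), or swapped on an existing camera.  A minimal sketch:

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")

   # The default lens is plane-parallel
   sc = yt.create_scene(ds, ("gas", "density"))
   sc.save("plane_parallel.png", sigma_clip=4.0)

   # Swap in a perspective lens and re-render the same scene
   sc.camera.set_lens("perspective")
   sc.save("perspective.png", sigma_clip=4.0)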

Plane Parallel
++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.PlaneParallelLens` is the
standard lens type used for orthographic projections.  All rays emerge
parallel to each other, arranged along a plane.

Perspective and Stereo Perspective
++++++++++++++++++++++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.PerspectiveLens`
adjusts for an opening view angle, so that the scene will have an
element of perspective to it.
The :class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens`
is identical to PerspectiveLens, but it produces two images from nearby
camera positions for use in 3D viewing. How three-dimensional the image
appears when viewed will depend upon the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens.disparity`,
which is half the maximum distance between two corresponding points in the left
and right images. By default, it is set to 3 pixels.


Fisheye or Dome
+++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.FisheyeLens`
is appropriate for viewing an arbitrary field of view.  Fisheye images
are typically used for dome-based presentations; the Hayden Planetarium,
for instance, has a field of view of 194.6 degrees.  The images returned
by this camera will be flat pixel arrays that can and should be reshaped
to the camera's resolution.

Spherical and Stereo Spherical
++++++++++++++++++++++++++++++

The :class:`~yt.visualization.volume_rendering.lens.SphericalLens` produces
a cylindrical-spherical projection.  Movies rendered in this way can be
displayed as YouTube 360-degree videos (for more information, see
`the YouTube help: Upload 360-degree videos
<https://support.google.com/youtube/answer/6178631?hl=en>`_).
The :class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens`
is identical to :class:`~yt.visualization.volume_rendering.lens.SphericalLens`,
but it produces two images from nearby camera positions for virtual reality
movies, which can be displayed on head-tracking devices (e.g., Oculus Rift)
or in the mobile YouTube app with Google Cardboard (for more information,
see `the YouTube help: Upload virtual reality videos
<https://support.google.com/youtube/answer/6316263?hl=en>`_).
`This virtual reality video
<https://youtu.be/ZYWY53X7UQE>`_ on YouTube is an example produced with
:class:`~yt.visualization.volume_rendering.lens.StereoSphericalLens`. As in
the case of
:class:`~yt.visualization.volume_rendering.lens.StereoPerspectiveLens`, the
difference between the two images can be controlled by changing the value of
:attr:`~yt.visualization.volume_rendering.lens.StereoSphericalLens.disparity`
(see above).

.. _annotated-vr-example:

Annotated Examples
------------------

.. warning:: 3D visualizations can be fun but frustrating!  Tuning the
             parameters to both look nice and convey useful scientific
             information can be hard.  We've provided information about best
             practices and tried to make it easy to develop nice
             visualizations with the interface, but getting them *just right*
             is often time-consuming.  It's usually best to start out simple
             and expand and tweak as needed.

The scene interface provides a modular interface for creating renderings
of arbitrary data sources. As such, manual composition of a scene can require
a bit more work, but we also provide several helper functions that attempt
to create satisfactory default volume renderings.

When the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function is called, first an empty
:class:`~yt.visualization.volume_rendering.scene.Scene` object is created.
Next, a :class:`~yt.visualization.volume_rendering.api.VolumeSource`
object is created, which decomposes the volume elements
into a tree structure to provide back-to-front rendering of fixed-resolution
blocks of data.  (If the volume elements are grids, this uses an
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` object.) When the
:class:`~yt.visualization.volume_rendering.api.VolumeSource`
object is created, by default it will create a transfer function
based on the extrema of the field that you are rendering. The transfer function
describes how rays that pass through the domain are "transferred" and thus how
brightness and color correlate to the field values.  Modifying and adjusting
the transfer function is the primary way to modify the appearance of an image
based on volumes.

Once the basic set of objects to be rendered is constructed (e.g.,
:class:`~yt.visualization.volume_rendering.scene.Scene`,
:class:`~yt.visualization.volume_rendering.render_source.RenderSource`, and
:class:`~yt.visualization.volume_rendering.api.VolumeSource` objects), a
:class:`~yt.visualization.volume_rendering.camera.Camera` is created and
added to the scene.  By default the creation of a camera also creates a
plane-parallel :class:`~yt.visualization.volume_rendering.lens.Lens`
object. The analog to a real camera is intentional -- a camera can take a
picture of a scene from a particular point in time and space, but different
lenses can be swapped in and out.  For example, this might include a fisheye
lens, a spherical lens, or some other method of describing the direction and
origin of rays for rendering. Once the camera is added to the scene object, we
call the main methods of the
:class:`~yt.visualization.volume_rendering.scene.Scene` class,
:meth:`~yt.visualization.volume_rendering.scene.Scene.render` and
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`.  When rendered,
the scene will loop through all of the
:class:`~yt.visualization.volume_rendering.render_source.RenderSource` objects
that have been added and integrate the radiative transfer equations through the
volume. Finally, the image and scene object are returned to the user. An example
script that uses the high-level :func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function to quickly set up defaults is:

.. python-script::

  import yt

  # load the data
  ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")

  # volume render the ("gas", "density") field, and save the resulting image
  im, sc = yt.volume_render(ds, ("gas", "density"), fname="rendering.png")

  # im is the image array generated. it is also saved to 'rendering.png'.
  # sc is an instance of a Scene object, which allows you to further refine
  # your renderings and later save them.

  # Let's zoom in and take a closer look
  sc.camera.width = (300, "kpc")
  sc.camera.switch_orientation()

  # Save the zoomed-in rendering
  sc.save("zoomed_rendering.png")

Alternatively, if you don't want to immediately generate an image of your
volume rendering, and you just want access to the default scene object,
you can skip the expensive operation of rendering by just running the
:func:`~yt.visualization.volume_rendering.volume_rendering.create_scene`
function in lieu of the
:func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
function. Example:

.. python-script::

    import numpy as np

    import yt

    ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
    sc = yt.create_scene(ds, ("gas", "density"))

    source = sc[0]

    source.transfer_function = yt.ColorTransferFunction(
        np.log10((1e-30, 1e-23)), grey_opacity=True
    )


    def linramp(vals, minval, maxval):
        return (vals - vals.min()) / (vals.max() - vals.min())


    source.transfer_function.map_to_colormap(
        np.log10(1e-25), np.log10(8e-24), colormap="arbre", scale_func=linramp
    )

    # For this low resolution dataset it's very important to use interpolated
    # vertex-centered data to avoid artifacts. For high resolution data this
    # setting may cause a substantial slowdown for marginal visual improvement.
    source.set_use_ghost_zones(True)

    cam = sc.camera

    cam.width = 15 * yt.units.kpc
    cam.focus = ds.domain_center
    cam.normal_vector = [-0.3, -0.3, 1]
    cam.switch_orientation()

    sc.save("rendering.png")

For an in-depth tutorial on how to create a Scene and modify its contents,
see this annotated :ref:`volume-rendering-tutorial`.


.. _volume-rendering-method:

Volume Rendering Method
-----------------------

Direct ray casting through a volume enables the generation of new types of
visualizations and images describing a simulation.  yt has the facility
to generate volume renderings by a direct ray casting method.  However, the
ability to create volume renderings informed by analysis by other mechanisms --
for instance, halo location, angular momentum, spectral energy distributions --
is useful.

The volume rendering in yt follows a relatively straightforward approach.

#. Create a set of transfer functions governing the emission and absorption as
   a function of one or more variables (:math:`f(v) \rightarrow (r,g,b,a)`).
   These can be functions of any field variable, weighted by independent
   fields, and even weighted by other evaluated transfer functions.  (See
   :ref:`transfer_functions`.)
#. Partition all chunks into non-overlapping, fully domain-tiling "bricks."
   Each of these "bricks" contains the finest available data at any location.
#. Generate vertex-centered data for all grids in the volume rendered domain.
#. Order the bricks from front-to-back.
#. Construct a plane of rays parallel to the image plane, with initial values
   set to zero and located at the back of the region to be rendered.
#. For every brick, identify which rays intersect.  These are then each 'cast'
   through the brick.

   #. Every cell a ray intersects is sampled 5 times (adjustable by parameter),
      and data values at each sampling point are trilinearly interpolated from
      the vertex-centered data.
   #. Each transfer function is evaluated at each sample point.  This gives us,
      for each channel, both emission (:math:`j`) and absorption
      (:math:`\alpha`) values.
   #. The value for the pixel corresponding to the current ray is updated with
      new values calculated by rectangular integration over the path length:

      :math:`v^{n+1}_{i} = j_{i}\Delta s + (1 - \alpha_{i}\Delta s)v^{n}_{i}`

      where :math:`n` and :math:`n+1` represent the pixel before and after
      passing through a sample, :math:`i` is the color (red, green, blue), and
      :math:`\Delta s` is the path length between samples.
   #. Determine whether any additional integration will change the sample
      value; if not, terminate integration.  (This reduces integration time
      when rendering front-to-back.)
#. The image is returned to the user:

.. image:: _images/vr_sample.jpg
   :width: 512
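
As an illustration only (yt's actual implementation is in Cython), the
per-sample update in step 6 amounts to the following:

.. code-block:: python

   import numpy as np


   def composite_sample(v, j, alpha, ds):
       """One rectangular-integration step for a single ray.

       v, j, and alpha are length-3 arrays (red, green, blue);
       ds is the path length between samples.
       """
       # Emission adds light; absorption attenuates what has accumulated
       return j * ds + (1.0 - alpha * ds) * v


   v = np.zeros(3)  # the ray starts with no accumulated light
   j = np.array([0.2, 0.1, 0.0])  # illustrative emission values
   alpha = np.array([0.5, 0.5, 0.5])  # illustrative absorption values
   v = composite_sample(v, j, alpha, ds=0.1)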

Parallelism
-----------

yt can utilize both MPI and OpenMP parallelism for volume rendering.  Both, and
their combination, are described below.

MPI Parallelization
^^^^^^^^^^^^^^^^^^^

Currently the volume renderer is parallelized using MPI to decompose the volume
by attempting to split up the
:class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` in a balanced way.  This
has two advantages:

#.  The :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree`
    construction is parallelized since each MPI task only needs
    to know about the part of the tree it will traverse.
#.  Each MPI task will only read data for the portion of the volume that it
    has been assigned.

Once the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree` has been
constructed, each MPI task proceeds with the rendering
phase until all of its bricks are completed.  At that point, each MPI task has
a full image plane, and a tree reduction is used to construct the final
image, alpha blending the images together at each reduction phase.

Caveats:

#.  At this time, the :class:`~yt.utilities.amr_kdtree.amr_kdtree.AMRKDTree`
    can only be decomposed over a power-of-2 number of MPI
    tasks.  If a number of tasks that is not a power of 2 is used, the largest
    power of 2 below that number is used, and the remaining cores will be idle.
    This issue is being actively addressed by current development.
#.  Each MPI task, currently, holds the entire image plane.  Therefore when
    image plane sizes get large (>2048^2), the memory usage can also get large,
    limiting the number of MPI tasks you can use.  This is also being addressed
    in current development by using image plane decomposition.
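
Enabling MPI parallelism follows the usual yt pattern: call
``yt.enable_parallelism()`` at the top of the script and launch it with
``mpirun``.  A minimal sketch:

.. code-block:: python

   # run with, e.g.: mpirun -np 4 python this_script.py
   import yt

   yt.enable_parallelism()

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   sc.render()

   # After the tree reduction, only the root task needs to write the image
   if yt.is_root():
       sc.save("rendering.png", render=False)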

For more information about enabling parallelism, see :ref:`parallel-computation`.

OpenMP Parallelization
^^^^^^^^^^^^^^^^^^^^^^

The volume rendering is also parallelized using the OpenMP interface in Cython.
While the MPI parallelization is done using domain decomposition, the OpenMP
threading parallelizes the rays intersecting a given brick of data.  As the
average brick size relative to the image plane increases, the parallel
efficiency increases.

By default, the volume renderer will use the total number of cores available on
the symmetric multiprocessing (SMP) compute platform.  For example, if you have
a shiny new laptop with 8 cores, you'll by default launch 8 OpenMP threads.
The number of threads can be controlled with the ``num_threads`` keyword in
:meth:`~yt.visualization.volume_rendering.camera.Camera.snapshot`.  You may
also restrict the number of OpenMP threads used by default by modifying the
environment variable ``OMP_NUM_THREADS``.
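
Because the thread count is read from the environment, it can typically also be
capped from within a script, provided this happens before any rendering.  A
minimal sketch (the cap of 4 threads is illustrative):

.. code-block:: python

   import os

   # Must be set before the OpenMP runtime spins up its thread pool
   os.environ["OMP_NUM_THREADS"] = "4"

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))
   sc.save("rendering.png")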

Running in Hybrid MPI + OpenMP
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The two methods for volume rendering parallelization can be used together to
leverage large supercomputing resources.  When choosing how to balance the
number of MPI tasks vs. OpenMP threads, there are a few things to keep in mind.
For these examples, we will assume you are using ``Nmpi`` MPI tasks and ``Nmp``
OpenMP threads, on a total of ``P`` cores. We will assume that the machine has
``Nnode`` SMP nodes, each with ``cores_per_node`` cores per node.

#.  For each MPI task, ``num_threads`` (or ``OMP_NUM_THREADS``) OpenMP threads
    will be used. Therefore you should usually make sure that
    ``Nmpi * Nmp = P``.
#.  For simulations with many grids/AMRKDTree bricks, you generally want to
    increase ``Nmpi``.
#.  For simulations with large image planes (>2048^2), you generally want to
    decrease ``Nmpi`` and increase ``Nmp``. This is because, currently, each
    MPI task stores the entire image plane, and doing so can approach the
    memory limits of a given SMP node.
#.  Please make sure you understand the (super)computer topology in terms of
    the numbers of cores per socket, node, etc. when making these decisions.
#.  For many cases when rendering using your laptop/desktop, OpenMP will
    provide a good enough speedup by default that it is preferable to launching
    the MPI tasks.

For more information about enabling parallelism, see :ref:`parallel-computation`.

.. _vr-faq:

Volume Rendering Frequently Asked Questions
-------------------------------------------

.. _opaque_rendering:

Opacity
^^^^^^^

There are currently two models for opacity when rendering a volume, which are
controlled in the ``ColorTransferFunction`` with the keyword
``grey_opacity=False`` (the default) or ``grey_opacity=True``.  With the first,
each of the red, green, and blue channels is only opaque to
itself.  This means that if a ray that has some amount of red then encounters
material that emits blue, the red will still exist and in the end that pixel
will be a combination of blue and red.  However, if the ``ColorTransferFunction``
is set up with ``grey_opacity=True``, then blue will be opaque to red, and only
the blue emission will remain.

For an in-depth example, please see the cookbook example on opaque renders here:
:ref:`cookbook-opaque_rendering`.

.. _sigma_clip:

Improving Image Contrast with Sigma Clipping
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If your images appear to be too dark, you can try using the ``sigma_clip``
keyword in the :meth:`~yt.visualization.volume_rendering.scene.Scene.save`
or :func:`~yt.visualization.volume_rendering.volume_rendering.volume_render`
functions.  Because the brightness range in an image is scaled to match the
range of emissivity values of the underlying rendering, a few really
high-emissivity points can scale the rest of your image to be quite
dark.  ``sigma_clip=N`` can address this by removing values that are more
than ``N`` standard deviations brighter than the mean of your image.
Typically, a choice of 4 to 6 will help dramatically with your resulting image.
See the cookbook recipe :ref:`cookbook-sigma_clip` for a demonstration.
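
For instance, several clip values can be compared from a single render (see
:ref:`when_to_render` for the ``render=False`` keyword used here):

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   # The first save renders the scene; later saves reuse that render
   sc.save("clip_none.png")
   sc.save("clip_4.png", sigma_clip=4.0, render=False)
   sc.save("clip_6.png", sigma_clip=6.0, render=False)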

.. _when_to_render:

When to Render
^^^^^^^^^^^^^^

The rendering of a scene is the most computationally demanding step in
creating a final image, and there are a number of ways to control at which
point a scene is actually rendered. The default behavior of the
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` function includes
a call to :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. This
means that in most cases (including the above examples), after you set up your
scene and volumes, you can simply call
:meth:`~yt.visualization.volume_rendering.scene.Scene.save` without first
calling :meth:`~yt.visualization.volume_rendering.scene.Scene.render`. If you
wish to save the most recently rendered image without rendering again, set
``render=False`` in the call to
:meth:`~yt.visualization.volume_rendering.scene.Scene.save`.  Cases where you
may wish to use ``render=False`` include saving images at different
``sigma_clip`` values (see :ref:`cookbook-sigma_clip`) or when saving an image
that has already been rendered in a Jupyter notebook using
:meth:`~yt.visualization.volume_rendering.scene.Scene.show`. Changes to the
scene, including adding sources, modifying transfer functions, or adjusting
camera settings, generally require rendering again.
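
A minimal sketch of this pattern:

.. code-block:: python

   import yt

   ds = yt.load("Enzo_64/DD0043/data0043")
   sc = yt.create_scene(ds, ("gas", "density"))

   # save() calls render() for us the first time through
   sc.save("rendering.png")

   # Reuse the existing render under a different filename
   sc.save("rendering_copy.png", render=False)

   # Changing the camera invalidates the previous image, so render again
   sc.camera.zoom(2.0)
   sc.save("rendering_zoomed.png")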