1# -*- coding: utf-8 -*-
2"""
3==========================================
4Crossing-preserving contextual enhancement
5==========================================
6
7This demo presents an example of crossing-preserving contextual enhancement of
8FOD/ODF fields [Meesters2016]_, implementing the contextual PDE framework
9of [Portegies2015a]_ for processing HARDI data. The aim is to enhance the
10alignment of elongated structures in the data such that crossing/junctions are
11maintained while reducing noise and small incoherent structures. This is
12achieved via a hypo-elliptic 2nd order PDE in the domain of coupled positions
13and orientations :math:`\mathbb{R}^3 \rtimes S^2`. This domain carries a
14non-flat geometrical differential structure that allows including a notion of
15alignment between neighboring points.
16
17Let :math:`({\bf y},{\bf n}) \in \mathbb{R}^3\rtimes S^2` where
18:math:`{\bf y} \in \mathbb{R}^{3}` denotes the spatial part, and
19:math:`{\bf n} \in S^2` the angular part.
20Let :math:`W:\mathbb{R}^3\rtimes S^2\times \mathbb{R}^{+} \to \mathbb{R}` be
21the function representing the evolution of FOD/ODF field. Then, the contextual
22PDE with evolution time :math:`t\geq 0` is given by:
23
24.. math::
25
26  \begin{cases}
27  \frac{\partial}{\partial t} W({\bf y},{\bf n},t) &= ((D^{33}({\bf n} \cdot
28          \nabla)^2 + D^{44} \Delta_{S^2})W)({\bf y},{\bf n},t)
29  \\ W({\bf y},{\bf n},0) &= U({\bf y},{\bf n})
30  \end{cases},
31
32where:
33
34* :math:`D^{33}>0` is  the coefficient for the spatial smoothing (which goes only in the direction of :math:`n`);
35
36* :math:`D^{44}>0` is the coefficient for the angular smoothing (here :math:`\Delta_{S^2}` denotes the Laplace-Beltrami operator on the sphere :math:`S^2`);
37
38* :math:`U:\mathbb{R}^3\rtimes S^2 \to \mathbb{R}` is the initial condition given by the noisy FOD/ODF’s field.
39
40This equation is solved via a shift-twist convolution (denoted by :math:`\ast_{\mathbb{R}^3\rtimes S^2}`) with its corresponding kernel :math:`P_t:\mathbb{R}^3\rtimes S^2 \to \mathbb{R}^+`:
41
42.. math::
43
44  W({\bf y},{\bf n},t) = (P_t \ast_{\mathbb{R}^3 \rtimes S^2} U)({\bf y},{\bf n})
45  = \int_{\mathbb{R}^3} \int_{S^2} P_t (R^T_{{\bf n}^\prime}({\bf y}-{\bf y}^\prime),
46   R^T_{{\bf n}^\prime} {\bf n} ) U({\bf y}^\prime, {\bf n}^\prime)
47
48Here, :math:`R_{\bf n}` is any 3D rotation that maps the vector :math:`(0,0,1)`
49onto :math:`{\bf n}`.
50
51Note that the shift-twist convolution differs from a Euclidean convolution and
52takes into account the non-flat structure of the space :math:`\mathbb{R}^3\rtimes S^2`.
53
54The kernel :math:`P_t` has a stochastic interpretation [DuitsAndFranken2011]_.
55It can be seen as the limiting distribution obtained by accumulating random
56walks of particles in the position/orientation domain, where in each step the
57particles can (randomly) move forward/backward along their current orientation,
58and (randomly) change their orientation. This is an extension to the 3D case of
59the process for contour enhancement of 2D images.
60
61.. figure:: _static/stochastic_process.jpg
62   :scale: 60 %
63   :align: center
64
65   The random motion of particles (a) and its corresponding probability map
66   (b) in 2D. The 3D kernel is shown on the right. Adapted from
67   [Portegies2015a]_.
68
69In practice, as the exact analytical formulas for the kernel :math:`P_t`
70are unknown, we use the approximation given in [Portegies2015b]_.
71
72"""
73
74"""
75The enhancement is evaluated on the Stanford HARDI dataset
76(150 orientations, b=2000 $s/mm^2$) where Rician noise is added. Constrained
77spherical deconvolution is used to model the fiber orientations.
78
79"""
80
81import numpy as np
82from dipy.core.gradients import gradient_table
83from dipy.data import get_fnames, default_sphere
84from dipy.io.image import load_nifti_data
85from dipy.io.gradients import read_bvals_bvecs
86from dipy.sims.voxel import add_noise
87
88# Read data
89hardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames('stanford_hardi')
90data = load_nifti_data(hardi_fname)
91bvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)
92gtab = gradient_table(bvals, bvecs)
93
94# Add Rician noise
95from dipy.segment.mask import median_otsu
96b0_slice = data[:, :, :, 1]
97b0_mask, mask = median_otsu(b0_slice)
98np.random.seed(1)
99data_noisy = add_noise(data, 10.0, np.mean(b0_slice[mask]), noise_type='rician')
100
101# Select a small part of it.
102padding = 3  # Include a larger region to avoid boundary effects
103data_small = data[25-padding:40+padding, 65-padding:80+padding, 35:42]
104data_noisy_small = data_noisy[25-padding:40+padding,
105                              65-padding:80+padding,
106                              35:42]
107
108"""
109Enables/disables interactive visualization
110"""
111
112interactive = False
113
114"""
115Fit an initial model to the data, in this case Constrained Spherical
116Deconvolution is used.
117"""
118
119# Perform CSD on the original data
120from dipy.reconst.csdeconv import auto_response_ssst
121from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel
122response, ratio = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)
123csd_model_orig = ConstrainedSphericalDeconvModel(gtab, response)
124csd_fit_orig = csd_model_orig.fit(data_small)
125csd_shm_orig = csd_fit_orig.shm_coeff
126
127# Perform CSD on the original data + noise
128response, ratio = auto_response_ssst(gtab, data_noisy, roi_radii=10, fa_thr=0.7)
129csd_model_noisy = ConstrainedSphericalDeconvModel(gtab, response)
130csd_fit_noisy = csd_model_noisy.fit(data_noisy_small)
131csd_shm_noisy = csd_fit_noisy.shm_coeff
132
133"""
134Inspired by [Rodrigues2010]_, a lookup-table is created, containing
135rotated versions of the kernel :math:`P_t` sampled over a discrete set of
136orientations. In order to ensure rotationally invariant processing, the
137discrete orientations are required to be equally distributed over a sphere.
138By default, a sphere with 100 directions is used.
139
140"""
141
142from dipy.denoise.enhancement_kernel import EnhancementKernel
143from dipy.denoise.shift_twist_convolution import convolve
144
145# Create lookup table
146D33 = 1.0
147D44 = 0.02
148t = 1
149k = EnhancementKernel(D33, D44, t)
150
151"""
152Visualize the kernel
153"""
154
155from dipy.viz import window, actor
156from dipy.reconst.shm import sf_to_sh, sh_to_sf
157scene = window.Scene()
158
159# convolve kernel with delta spike
160spike = np.zeros((7, 7, 7, k.get_orientations().shape[0]), dtype=np.float64)
161spike[3, 3, 3, 0] = 1
162spike_shm_conv = convolve(sf_to_sh(spike, k.get_sphere(), sh_order=8), k,
163                          sh_order=8, test_mode=True)
164
165spike_sf_conv = sh_to_sf(spike_shm_conv, default_sphere, sh_order=8)
166model_kernel = actor.odf_slicer(spike_sf_conv * 6,
167                                sphere=default_sphere,
168                                norm=False,
169                                scale=0.4)
170model_kernel.display(x=3)
171scene.add(model_kernel)
172scene.set_camera(position=(30, 0, 0), focal_point=(0, 0, 0), view_up=(0, 0, 1))
173window.record(scene, out_path='kernel.png', size=(900, 900))
174if interactive:
175    window.show(scene)
176
177"""
178.. figure:: kernel.png
179   :align: center
180
181   Visualization of the contour enhancement kernel.
182"""
183
184"""
185Shift-twist convolution is applied on the noisy data
186"""
187
188# Perform convolution
189csd_shm_enh = convolve(csd_shm_noisy, k, sh_order=8)
190
191
192"""
193The Sharpening Deconvolution Transform is applied to sharpen the ODF field.
194"""
195
196# Sharpen via the Sharpening Deconvolution Transform
197from dipy.reconst.csdeconv import odf_sh_to_sharp
198csd_shm_enh_sharp = odf_sh_to_sharp(csd_shm_enh, default_sphere, sh_order=8,
199                                    lambda_=0.1)
200
201# Convert raw and enhanced data to discrete form
202csd_sf_orig = sh_to_sf(csd_shm_orig, default_sphere, sh_order=8)
203csd_sf_noisy = sh_to_sf(csd_shm_noisy, default_sphere, sh_order=8)
204csd_sf_enh = sh_to_sf(csd_shm_enh, default_sphere, sh_order=8)
205csd_sf_enh_sharp = sh_to_sf(csd_shm_enh_sharp, default_sphere, sh_order=8)
206
207# Normalize the sharpened ODFs
208csd_sf_enh_sharp = csd_sf_enh_sharp * np.amax(csd_sf_orig)/np.amax(csd_sf_enh_sharp) * 1.25
209
210"""
211The end results are visualized. It can be observed that the end result after
212diffusion and sharpening is closer to the original noiseless dataset.
213"""
214
215scene = window.Scene()
216
217# original ODF field
218fodf_spheres_org = actor.odf_slicer(csd_sf_orig,
219                                    sphere=default_sphere,
220                                    scale=0.4,
221                                    norm=False)
222fodf_spheres_org.display(z=3)
223fodf_spheres_org.SetPosition(0, 25, 0)
224scene.add(fodf_spheres_org)
225
226# ODF field with added noise
227fodf_spheres = actor.odf_slicer(csd_sf_noisy,
228                                sphere=default_sphere,
229                                scale=0.4,
230                                norm=False,)
231fodf_spheres.SetPosition(0, 0, 0)
232scene.add(fodf_spheres)
233
234# Enhancement of noisy ODF field
235fodf_spheres_enh = actor.odf_slicer(csd_sf_enh,
236                                    sphere=default_sphere,
237                                    scale=0.4,
238                                    norm=False)
239fodf_spheres_enh.SetPosition(25, 0, 0)
240scene.add(fodf_spheres_enh)
241
242# Additional sharpening
243fodf_spheres_enh_sharp = actor.odf_slicer(csd_sf_enh_sharp,
244                                          sphere=default_sphere,
245                                          scale=0.4,
246                                          norm=False)
247fodf_spheres_enh_sharp.SetPosition(25, 25, 0)
248scene.add(fodf_spheres_enh_sharp)
249
250window.record(scene, out_path='enhancements.png', size=(900, 900))
251if interactive:
252    window.show(scene)
253
254"""
255
256.. figure:: enhancements.png
257   :align: center
258
259   The results after enhancements. Top-left: original noiseless data.
260   Bottom-left: original data with added Rician noise (SNR=10). Bottom-right:
261   After enhancement of noisy data. Top-right: After enhancement and sharpening
262   of noisy data.
263
264References
265----------
266
267.. [Meesters2016] S. Meesters, G. Sanguinetti, E. Garyfallidis, J. Portegies,
268   R. Duits. (2016) Fast implementations of contextual PDE’s for HARDI data
269   processing in DIPY. ISMRM 2016 conference.
270
271.. [Portegies2015a] J. Portegies, R. Fick, G. Sanguinetti, S. Meesters,
272   G.Girard, and R. Duits. (2015) Improving Fiber Alignment in HARDI by
273   Combining Contextual PDE flow with Constrained Spherical Deconvolution.
274   PLoS One.
275
276.. [Portegies2015b] J. Portegies, G. Sanguinetti, S. Meesters, and R. Duits.
277   (2015) New Approximation of a Scale Space Kernel on SE(3) and Applications
278   in Neuroimaging. Fifth International Conference on Scale Space and
279   Variational Methods in Computer Vision.
280
281.. [DuitsAndFranken2011] R. Duits and E. Franken (2011) Left-invariant
282   diffusions on the space of positions and orientations and their application
283   to crossing-preserving smoothing of HARDI images. International Journal of
284   Computer Vision, 92:231-264.
285
286.. [Rodrigues2010] P. Rodrigues, R. Duits, B. Romeny, A. Vilanova (2010).
287   Accelerated Diffusion Operators for Enhancing DW-MRI. Eurographics Workshop
288   on Visual Computing for Biology and Medicine. The Eurographics Association.
289
290"""
291