/*#******************************************************************************
** IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
**
** By downloading, copying, installing or using the software you agree to this license.
** If you do not agree to this license, do not download, install,
** copy or use the software.
**
**
** bioinspired : interfaces allowing OpenCV users to integrate Human Vision System models. Presented models originate from Jeanny Herault's original research and have been reused and adapted by the author & collaborators for computer vision applications since his thesis with Alice Caplier at Gipsa-Lab.
**
** Maintainers : Listic lab (code author current affiliation & applications) and Gipsa Lab (original research origins & applications)
**
** Creation - enhancement process 2007-2013
** Author: Alexandre Benoit (benoit.alexandre.vision@gmail.com), LISTIC lab, Annecy le vieux, France
**
** These algorithms have been developed by Alexandre BENOIT since his thesis with Alice Caplier at Gipsa-Lab (www.gipsa-lab.inpg.fr) and the research he pursues at LISTIC Lab (www.listic.univ-savoie.fr).
** Refer to the following research paper for more information:
** Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
** This work has been carried out thanks to Jeanny Herault, whose research and great discussions are the basis of all this work. Please take a look at his book:
** Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
**
**
** This class is based on image processing tools of the author and already used within the Retina class (this is the same code as the method retina::applyFastToneMapping, but in an independent class; it is light from a memory requirement point of view). It implements an adaptation of the efficient tone mapping algorithm proposed in David Alleysson, Sabine Susstrunk and Laurence Meylan's work; please cite:
** -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
**
**
** License Agreement
** For Open Source Computer Vision Library
**
** Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
** Copyright (C) 2008-2011, Willow Garage Inc., all rights reserved.
**
** For Human Visual System tools (bioinspired)
** Copyright (C) 2007-2011, LISTIC Lab, Annecy le Vieux and GIPSA Lab, Grenoble, France, all rights reserved.
**
** Third party copyrights are property of their respective owners.
**
** Redistribution and use in source and binary forms, with or without modification,
** are permitted provided that the following conditions are met:
**
** * Redistributions of source code must retain the above copyright notice,
** this list of conditions and the following disclaimer.
**
** * Redistributions in binary form must reproduce the above copyright notice,
** this list of conditions and the following disclaimer in the documentation
** and/or other materials provided with the distribution.
**
** * The name of the copyright holders may not be used to endorse or promote products
** derived from this software without specific prior written permission.
**
** This software is provided by the copyright holders and contributors "as is" and
** any express or implied warranties, including, but not limited to, the implied
** warranties of merchantability and fitness for a particular purpose are disclaimed.
** In no event shall the Intel Corporation or contributors be liable for any direct,
** indirect, incidental, special, exemplary, or consequential damages
** (including, but not limited to, procurement of substitute goods or services;
** loss of use, data, or profits; or business interruption) however caused
** and on any theory of liability, whether in contract, strict liability,
** or tort (including negligence or otherwise) arising in any way out of
** the use of this software, even if advised of the possibility of such damage.
*******************************************************************************/
63
/*
 * retinafasttonemapping.cpp
 *
 *  Created on: May 26, 2013
 *      Author: Alexandre Benoit
 */

#include "precomp.hpp"
#include "basicretinafilter.hpp"
#include "retinacolor.hpp"
#include <cstdio>
#include <sstream>
#include <valarray>

namespace cv
{
namespace bioinspired
{
/**
 * @class RetinaFastToneMappingImpl a wrapper class which allows the tone mapping algorithm of Meylan&al(2007) to be used with OpenCV.
 * This algorithm is already implemented in the Retina class (retina::applyFastToneMapping), but this standalone version does not require the whole retina model to be allocated. This allows a light memory footprint on low memory devices (smartphones, etc.).
 * As a summary, these are the model properties:
 * => 2 stages of local luminance adaptation with a different local neighborhood for each.
 * => the first stage models the retina photoreceptors local luminance adaptation
 * => the second stage models the ganglion cells local information adaptation
 * => compared to the initial publication, this class uses spatio-temporal low pass filters instead of spatial only filters.
 * ====> this can help noise robustness and temporal stability for video sequence use cases.
 * For more information, refer to the following papers:
 * Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
 * Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
 * Regarding the spatio-temporal filter and the complete retina model:
 * Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), By: Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.
 */

class RetinaFastToneMappingImpl : public RetinaFastToneMapping
{
public:
    /**
     * constructor
     * @param imageInput: the size of the images to process
     */
    RetinaFastToneMappingImpl(Size imageInput)
    {
        unsigned int nbPixels=imageInput.height*imageInput.width;

        // basic error check
        if (nbPixels <= 0)
            throw cv::Exception(-1, "Bad retina size setup : size height and width must be greater than zero", "RetinaImpl::setup", "retinafasttonemapping.cpp", 0);

        // resize buffers
        _inputBuffer.resize(nbPixels*3); // buffer supports gray images but also 3 channels color buffers... (larger is better...)
        _imageOutput.resize(nbPixels*3);
        _temp2.resize(nbPixels);
        // allocate the main filter with 2 setup sets of properties (one for each low pass filter)
        _multiuseFilter = makePtr<BasicRetinaFilter>(imageInput.height, imageInput.width, 2);
        // allocate the color manager (multiplexer/demultiplexer)
        _colorEngine = makePtr<RetinaColor>(imageInput.height, imageInput.width);
        // setup filter behaviors with default values
        setup();
    }

    /**
     * basic destructor
     */
    virtual ~RetinaFastToneMappingImpl() { }

    /**
     * method that applies a luminance correction (initially High Dynamic Range (HDR) tone mapping) using only the 2 local adaptation stages of the retina parvocellular channel : photoreceptors level and ganglion cells level. Spatio-temporal filtering is applied but limited to temporal smoothing and eventually high frequencies attenuation. This is a lighter method than the one available using the regular retina::run method. It is then faster but it does not include complete temporal filtering nor retina spectral whitening. It can therefore have a more limited effect on images with a very high dynamic range. This is an adaptation of the original still image HDR tone mapping algorithm of David Alleysson, Sabine Susstrunk and Laurence Meylan's work; please cite:
     * -> Meylan L., Alleysson D., and Susstrunk S., A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of Optical Society of America, A, Vol. 24, N 9, September, 1st, 2007, pp. 2807-2816
     @param inputImage the input image to process (RGB or gray levels)
     @param outputToneMappedImage the output tone mapped image
     */
    virtual void applyFastToneMapping(InputArray inputImage, OutputArray outputToneMappedImage) CV_OVERRIDE
    {
        // first convert input image to the compatible format :
        const bool colorMode = _convertCvMat2ValarrayBuffer(inputImage.getMat(), _inputBuffer);

        // process tone mapping
        if (colorMode)
        {
            _runRGBToneMapping(_inputBuffer, _imageOutput, true);
            _convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), true, outputToneMappedImage);
        }
        else
        {
            _runGrayToneMapping(_inputBuffer, _imageOutput);
            _convertValarrayBuffer2cvMat(_imageOutput, _multiuseFilter->getNBrows(), _multiuseFilter->getNBcolumns(), false, outputToneMappedImage);
        }
    }

    /**
     * setup method that updates tone mapping behaviors by adjusting the local luminance computation area
     * @param photoreceptorsNeighborhoodRadius the first stage local adaptation area
     * @param ganglioncellsNeighborhoodRadius the second stage local adaptation area
     * @param meanLuminanceModulatorK the factor applied to modulate the meanLuminance information (default is 1, see reference paper)
     */
    virtual void setup(const float photoreceptorsNeighborhoodRadius=3.f, const float ganglioncellsNeighborhoodRadius=1.f, const float meanLuminanceModulatorK=1.f) CV_OVERRIDE
    {
        // setup the spatio-temporal properties of each filter
        _meanLuminanceModulatorK = meanLuminanceModulatorK;
        _multiuseFilter->setV0CompressionParameter(1.f, 255.f, 128.f);
        _multiuseFilter->setLPfilterParameters(0.f, 0.f, photoreceptorsNeighborhoodRadius, 1);
        _multiuseFilter->setLPfilterParameters(0.f, 0.f, ganglioncellsNeighborhoodRadius, 2);
    }

private:
    // a filter able to perform local adaptation and low pass spatio-temporal filtering
    cv::Ptr <BasicRetinaFilter> _multiuseFilter;
    cv::Ptr <RetinaColor> _colorEngine;

    //!< buffers used to convert input cv::Mat to internal retina buffers format (valarrays)
    std::valarray<float> _inputBuffer;
    std::valarray<float> _imageOutput;
    std::valarray<float> _temp2;
    float _meanLuminanceModulatorK;

    void _convertValarrayBuffer2cvMat(const std::valarray<float> &grayMatrixToConvert, const unsigned int nbRows, const unsigned int nbColumns, const bool colorMode, OutputArray outBuffer)
    {
        // fill output buffer with the valarray buffer
        const float *valarrayPTR=get_data(grayMatrixToConvert);
        if (!colorMode)
        {
            outBuffer.create(cv::Size(nbColumns, nbRows), CV_8U);
            Mat outMat = outBuffer.getMat();
            for (unsigned int i=0;i<nbRows;++i)
            {
                for (unsigned int j=0;j<nbColumns;++j)
                {
                    cv::Point pixel(j,i);
                    outMat.at<unsigned char>(pixel)=(unsigned char)*(valarrayPTR++);
                }
            }
        }
        else
        {
            const unsigned int nbPixels=nbColumns*nbRows;
            const unsigned int doubleNBpixels=nbColumns*nbRows*2;
            outBuffer.create(cv::Size(nbColumns, nbRows), CV_8UC3);
            Mat outMat = outBuffer.getMat();
            for (unsigned int i=0;i<nbRows;++i)
            {
                for (unsigned int j=0;j<nbColumns;++j,++valarrayPTR)
                {
                    cv::Point pixel(j,i);
                    cv::Vec3b pixelValues;
                    pixelValues[2]=(unsigned char)*(valarrayPTR);
                    pixelValues[1]=(unsigned char)*(valarrayPTR+nbPixels);
                    pixelValues[0]=(unsigned char)*(valarrayPTR+doubleNBpixels);

                    outMat.at<cv::Vec3b>(pixel)=pixelValues;
                }
            }
        }
    }
    bool _convertCvMat2ValarrayBuffer(InputArray inputMat, std::valarray<float> &outputValarrayMatrix)
    {
        const Mat inputMatToConvert=inputMat.getMat();
        // first check input consistency
        if (inputMatToConvert.empty())
            throw cv::Exception(-1, "RetinaImpl cannot be applied, input buffer is empty", "RetinaImpl::run", "RetinaImpl.h", 0);

        // retrieve color mode from image input
        int imageNumberOfChannels = inputMatToConvert.channels();

        // convert to float AND fill the valarray buffer
        typedef float T; // define here the target pixel format, here, float
        const int dsttype = DataType<T>::depth; // output buffer is float format

        const unsigned int nbPixels=inputMat.getMat().rows*inputMat.getMat().cols;
        const unsigned int doubleNBpixels=inputMat.getMat().rows*inputMat.getMat().cols*2;

        if(imageNumberOfChannels==4)
        {
            // create a cv::Mat table (for RGBA planes)
            cv::Mat planes[4] =
            {
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
            };
            planes[3] = cv::Mat(inputMatToConvert.size(), dsttype); // last channel (alpha) does not point into the valarray (not useful in our case)
            // split color cv::Mat in 4 planes... it fills the valarray directly
            cv::split(Mat_<Vec<T, 4> >(inputMatToConvert), planes);
        }
        else if (imageNumberOfChannels==3)
        {
            // create a cv::Mat table (for RGB planes)
            cv::Mat planes[] =
            {
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[doubleNBpixels]),
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[nbPixels]),
                cv::Mat(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0])
            };
            // split color cv::Mat in 3 planes... it fills the valarray directly
            cv::split(cv::Mat_<Vec<T, 3> >(inputMatToConvert), planes);
        }
        else if(imageNumberOfChannels==1)
        {
            // create a cv::Mat header for the valarray
            cv::Mat dst(inputMatToConvert.size(), dsttype, &outputValarrayMatrix[0]);
            inputMatToConvert.convertTo(dst, dsttype);
        }
        else
            CV_Error(Error::StsUnsupportedFormat, "input image must be single channel (gray levels), bgr format (color) or bgra (color with transparency which won't be considered)");

        return imageNumberOfChannels>1; // return bool : false for gray level image processing, true for color mode
    }
273
274
    // run the initialized retina filter in order to perform gray image tone mapping; after this call all retina outputs are updated
    void _runGrayToneMapping(const std::valarray<float> &grayImageInput, std::valarray<float> &grayImageOutput)
    {
        // apply tone mapping on the multiplexed image
        // -> photoreceptors local adaptation (large area adaptation)
        _multiuseFilter->runFilter_LPfilter(grayImageInput, grayImageOutput, 0); // compute low pass filtering modeling the horizontal cells filtering to access local luminance
        _multiuseFilter->setV0CompressionParameterToneMapping(1.f, grayImageOutput.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
        _multiuseFilter->runFilter_LocalAdapdation(grayImageInput, grayImageOutput, _temp2); // adapt contrast to local luminance

        // -> ganglion cells local adaptation (short area adaptation)
        _multiuseFilter->runFilter_LPfilter(_temp2, grayImageOutput, 1); // compute low pass filtering (high cut frequency, removes spatio-temporal noise)
        _multiuseFilter->setV0CompressionParameterToneMapping(1.f, _temp2.max(), _meanLuminanceModulatorK*grayImageOutput.sum()/(float)_multiuseFilter->getNBpixels());
        _multiuseFilter->runFilter_LocalAdapdation(_temp2, grayImageOutput, grayImageOutput); // adapt contrast to local luminance
    }

    // run the initialized retina filter in order to perform color tone mapping; after this call all retina outputs are updated
    void _runRGBToneMapping(const std::valarray<float> &RGBimageInput, std::valarray<float> &RGBimageOutput, const bool useAdaptiveFiltering)
    {
        // multiplex the image with the color sampling method specified in the constructor
        _colorEngine->runColorMultiplexing(RGBimageInput);

        // apply tone mapping on the multiplexed image
        _runGrayToneMapping(_colorEngine->getMultiplexedFrame(), RGBimageOutput);

        // demultiplex the tone mapped image
        _colorEngine->runColorDemultiplexing(RGBimageOutput, useAdaptiveFiltering, _multiuseFilter->getMaxInputValue());

        // rescale result between 0 and 255
        _colorEngine->normalizeRGBOutput_0_maxOutputValue(255.0);

        // return the result
        RGBimageOutput=_colorEngine->getDemultiplexedColorFrame();
    }

};

Ptr<RetinaFastToneMapping> RetinaFastToneMapping::create(Size inputSize)
{
    return makePtr<RetinaFastToneMappingImpl>(inputSize);
}

}// end of namespace bioinspired
}// end of namespace cv
319