= OFX Programming Guide : Multi-Input Effects
Author:Bruno Nicoletti
2014-10-17
:toc:
:data-uri:
:source-highlighter: coderay

This guide will take you through the basics of creating effects that can be used in more than
one context, as well as how to make a multi-input effect.
Its source can be found in the pass:[C++]
file `Guide/Code/Example4/saturation.cpp`.
This plugin takes an RGB or RGBA image and increases or decreases its saturation by a parameter amount. It can
be used in two contexts: firstly as a simple filter, and secondly as a general effect, where it has an optional
second input clip which is used to control where the effect is applied.

== Multiple Contexts, Why Bother?

As briefly described in the first example, OFX has the concept of contexts in which an effect can be used.
Our example is going to work in the filter context and the general context.

The rules for a filter context are that it has to have one and only one input clip, called 'Source', and
one and only one output clip, called 'Output'.

For a general context, you have to have a single mandated clip called 'Output' and that is it. You are
free to have as many input clips as you need, name them how you like and choose how to set certain
important properties of the output.

Why would we want to do this? Because not all host applications behave the same way. For example, an
editing application will typically allow effects to be applied to clips on a timeline, and the effect
can only take a single input when used like that. A complicated node-based compositor is less restrictive:
its effects can typically have any number of inputs and the rules for certain behaviours are relaxed.

So you've written your OFX effect, and it can work with a single input, but would ideally work much better
with multiple inputs. You also want it to work as best it can across a range of host applications. If you
only wrote it as a general effect with more than one input, it couldn't work in an editor.
However, if you wrote it as a single input effect, it wouldn't work as well as it could in a node-based
compositor. Having your effect work in multiple contexts is the way to have it work as well as possible
in both kinds of application.

In this way an OFX host application, which knows which contexts it can support, will inspect the contexts
a plugin says it can be used in, and choose the most appropriate one for what it wants to do.

This example plugin shows you how to do that.

== Describing Our Plugin
Our basic describe action is pretty much the same as in all the other examples, but with one minor difference:
we set two contexts in which the effect can be used.

[source, c++]
.saturation.cpp
----
    // Define the image effects contexts we can be used in, in this case a filter
    // and a general effect.
    gPropertySuite->propSetString(effectProps,
                                  kOfxImageEffectPropSupportedContexts,
                                  0,
                                  kOfxImageEffectContextFilter);

    gPropertySuite->propSetString(effectProps,
                                  kOfxImageEffectPropSupportedContexts,
                                  1,
                                  kOfxImageEffectContextGeneral);
----

The snippet above shows the effect saying it can be used in the filter and general contexts.

Both of these have rules associated with how the plugin behaves in that context. Because the filter context is so
simple, most of the default behaviour just works and you don't have to trap many other actions.

In the case of the general context, the default behaviour might not work the way you want, and you
may have to trap other actions. Fortunately the defaults work for us as well.

[source, c++]
.saturation.cpp
----
  ////////////////////////////////////////////////////////////////////////////////
  //  describe the plugin in context
  OfxStatus
  DescribeInContextAction(OfxImageEffectHandle descriptor,
                          OfxPropertySetHandle inArgs)
  {
    // get the context we are being described for
    char *context;
    gPropertySuite->propGetString(inArgs, kOfxImageEffectPropContext, 0, &context);

    OfxPropertySetHandle props;
    // define the mandated single output clip
    gImageEffectSuite->clipDefine(descriptor, "Output", &props);

    // set the component types we can handle on our output
    gPropertySuite->propSetString(props,
                                  kOfxImageEffectPropSupportedComponents,
                                  0,
                                  kOfxImageComponentRGBA);
    gPropertySuite->propSetString(props,
                                  kOfxImageEffectPropSupportedComponents,
                                  1,
                                  kOfxImageComponentRGB);

    // define the mandated single source clip
    gImageEffectSuite->clipDefine(descriptor, "Source", &props);

    // set the component types we can handle on our main input
    gPropertySuite->propSetString(props,
                                  kOfxImageEffectPropSupportedComponents,
                                  0,
                                  kOfxImageComponentRGBA);
    gPropertySuite->propSetString(props,
                                  kOfxImageEffectPropSupportedComponents,
                                  1,
                                  kOfxImageComponentRGB);

    if(strcmp(context, kOfxImageEffectContextGeneral) == 0) {
      gImageEffectSuite->clipDefine(descriptor, "Mask", &props);

      // set the component types we can handle on our mask input
      gPropertySuite->propSetString(props,
                                    kOfxImageEffectPropSupportedComponents,
                                    0,
                                    kOfxImageComponentAlpha);
      gPropertySuite->propSetInt(props,
                                 kOfxImageClipPropOptional,
                                 0,
                                 1);
      gPropertySuite->propSetInt(props,
                                 kOfxImageClipPropIsMask,
                                 0,
                                 1);
    }

    ...
    [SNIP]
    ...

    return kOfxStatOK;
  }
----
I've snipped the simple parameter definition code out to save some space.
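
The snipped code presumably defines the single saturation double parameter in the same way as the
earlier examples did. A minimal sketch of what it plausibly looks like (the default and display
range values here are illustrative assumptions, not copied from the actual source):

[source, c++]
.saturation.cpp
----
    // get the parameter set from the effect descriptor
    OfxParamSetHandle paramSet;
    gImageEffectSuite->getParamSet(descriptor, &paramSet);

    // define a double parameter to control the saturation amount
    OfxPropertySetHandle paramProps;
    gParameterSuite->paramDefine(paramSet,
                                 kOfxParamTypeDouble,
                                 SATURATION_PARAM_NAME,
                                 &paramProps);

    // default to a saturation of 1, i.e. leave the image unchanged
    gPropertySuite->propSetDouble(paramProps, kOfxParamPropDefault, 0, 1.0);

    // give it a sensible range to display in the user interface
    gPropertySuite->propSetDouble(paramProps, kOfxParamPropDisplayMin, 0, 0.0);
    gPropertySuite->propSetDouble(paramProps, kOfxParamPropDisplayMax, 0, 2.0);
----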

Here we have the describe in context action. This will now be called once for
each context that a host application wants to support. You know which context
you are being described in by the **kOfxImageEffectPropContext** property on
inArgs.

Regardless of the
context, it describes two clips, "Source" and "Output", which will work
fine in both the filter and the general context. Note that we won't support
'alpha' on these two clips, we only support images that have colour components,
as how can you saturate a single channel image?

Finally, if the effect is in the general context, we describe a third clip
and call it "Mask". We then tell the host about that clip...

   - firstly, that we only want single component images from that clip,
   - secondly, that the clip is optional,
   - thirdly, that this clip is to be interpreted as a mask, so hosts
     that manage such things separately know it can be fed into this input.


image::Pics/SaturationNuke.jpg[ role = "thumb", align=center, title=Saturation Example in Nuke]

The image above shows our saturation example running inside Nuke. Nuke chose to instantiate
the plugin as a general context effect, not a filter, as the general context is the one it
prefers. You can see the
graph, and our saturation node has two inputs, one for the mask and one for the
source image. The control panel for the effect is also shown, with the saturation
value set to zero. Note the extra "MaskChannel" param, which was not specified by the plugin.
This was automatically generated by Nuke when it saw that the 'Mask' input to the
effect was single channel, so as to allow the user to choose which channel to use as a mask.

The result is an image whose desaturation amount is modulated by the alpha channel of the
mask image, which in this case is a right to left ramp.

== The Other Actions
All the other actions should be fairly familiar and you should be able to reason
them out pretty easily. The two that differ significantly because of
the multi-context use are the create instance action and the render action.
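
For example, the destroy instance action has nothing new in it; it presumably just reclaims the
instance data we allocate in the create instance action below. A minimal sketch, following the
pattern of the earlier examples:

[source, c++]
.saturation.cpp
----
  ////////////////////////////////////////////////////////////////////////////////
  /// instance destruction, delete the data we hung off the instance
  OfxStatus DestroyInstanceAction(OfxImageEffectHandle instance)
  {
    // fetch the instance data we stashed away at create time and free it
    MyInstanceData *myData = FetchInstanceData(instance);
    delete myData;
    return kOfxStatOK;
  }
----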

=== Create Instance
This is pretty familiar, though we have a slight change to handle the mask input.

[source, c++]
.saturation.cpp
----
  ////////////////////////////////////////////////////////////////////////////////
  /// instance construction
  OfxStatus CreateInstanceAction( OfxImageEffectHandle instance)
  {
    OfxPropertySetHandle effectProps;
    gImageEffectSuite->getPropertySet(instance, &effectProps);

    // To avoid continual lookup, put our handles into our instance
    // data, those handles are guaranteed to be valid for the duration
    // of the instance.
    MyInstanceData *myData = new MyInstanceData;

    // Set my private instance data
    gPropertySuite->propSetPointer(effectProps, kOfxPropInstanceData, 0, (void *) myData);

    // is this instance made for the general context?
    char *context = 0;
    gPropertySuite->propGetString(effectProps, kOfxImageEffectPropContext, 0,  &context);
    myData->isGeneralContext = context &&
                               (strcmp(context, kOfxImageEffectContextGeneral) == 0);

    // Cache the source and output clip handles
    gImageEffectSuite->clipGetHandle(instance, "Source", &myData->sourceClip, 0);
    gImageEffectSuite->clipGetHandle(instance, "Output", &myData->outputClip, 0);

    if(myData->isGeneralContext) {
      gImageEffectSuite->clipGetHandle(instance, "Mask", &myData->maskClip, 0);
    }

    // Cache away the param handles
    OfxParamSetHandle paramSet;
    gImageEffectSuite->getParamSet(instance, &paramSet);
    gParameterSuite->paramGetHandle(paramSet,
                                    SATURATION_PARAM_NAME,
                                    &myData->saturationParam,
                                    0);

    return kOfxStatOK;
  }
----

We are again using instance data to cache away a set of handles to clips and params (the struct's
constructor sets them all to NULL). We
are also recording which context our instance has been created for by checking the **kOfxImageEffectPropContext**
property of the effect. If it is a general context we also cache the 'Mask' input in our instance data. Pretty easy.
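
The MyInstanceData struct and the FetchInstanceData helper used above and in the render action are
not listed in this guide. A minimal sketch of what they plausibly look like, inferred from how they
are used here:

[source, c++]
.saturation.cpp
----
  // our private per-instance data, a handle for each clip and param we use
  struct MyInstanceData {
    bool isGeneralContext;

    // clip handles, cached away at create instance time
    OfxImageClipHandle sourceClip;
    OfxImageClipHandle outputClip;
    OfxImageClipHandle maskClip;

    // parameter handles
    OfxParamHandle saturationParam;

    // default everything to NULL/false
    MyInstanceData()
      : isGeneralContext(false)
      , sourceClip(NULL)
      , outputClip(NULL)
      , maskClip(NULL)
      , saturationParam(NULL)
    {}
  };

  // get our instance data back from where we stashed it on the effect handle
  MyInstanceData *FetchInstanceData(OfxImageEffectHandle instance)
  {
    OfxPropertySetHandle effectProps;
    gImageEffectSuite->getPropertySet(instance, &effectProps);

    MyInstanceData *myData = NULL;
    gPropertySuite->propGetPointer(effectProps, kOfxPropInstanceData, 0, (void **) &myData);
    return myData;
  }
----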

=== Rendering
Because we are now using a class to wrap up OFX images (see <<A Bit Of Housekeeping, below>>) the render code is a bit
tidier, but is pretty much still the same. The major difference is that we now fetch a third image,
the mask image, and we are prepared for that fetch to fail and keep going, as we may be in the filter context, or we may be
in the general context with the mask clip not connected.

[source, c++]
.saturation.cpp
----
  // Render an output image
  OfxStatus RenderAction( OfxImageEffectHandle instance,
                          OfxPropertySetHandle inArgs,
                          OfxPropertySetHandle outArgs)
  {
    // get the render window and the time from the inArgs
    OfxTime time;
    OfxRectI renderWindow;
    OfxStatus status = kOfxStatOK;

    gPropertySuite->propGetDouble(inArgs,
                                  kOfxPropTime,
                                  0,
                                  &time);
    gPropertySuite->propGetIntN(inArgs,
                                kOfxImageEffectPropRenderWindow,
                                4,
                                &renderWindow.x1);

    // get our instance data which has our clip and param handles
    MyInstanceData *myData = FetchInstanceData(instance);

    // get our param values
    double saturation = 1.0;
    gParameterSuite->paramGetValueAtTime(myData->saturationParam, time, &saturation);

    try {
      // fetch the image to render into from the output clip
      Image outputImg(myData->outputClip, time);
      if(!outputImg) {
        throw " no output image!";
      }

      // fetch the image to filter from the source clip
      Image sourceImg(myData->sourceClip, time);
      if(!sourceImg) {
        throw " no source image!";
      }

      // fetch the mask image at render time from the mask clip; it may not
      // be there, as we might be in the filter context, or the clip might
      // not be attached as it is optional, so don't worry if we don't have one
      Image maskImg(myData->maskClip, time);

      // now do our render depending on the data type
      if(outputImg.bytesPerComponent() == 1) {
        PixelProcessing<unsigned char, 255>(saturation,
                                            instance,
                                            sourceImg,
                                            maskImg,
                                            outputImg,
                                            renderWindow);
      }
      else if(outputImg.bytesPerComponent() == 2) {
        PixelProcessing<unsigned short, 65535>(saturation,
                                               instance,
                                               sourceImg,
                                               maskImg,
                                               outputImg,
                                               renderWindow);
      }
      else if(outputImg.bytesPerComponent() == 4) {
        PixelProcessing<float, 1>(saturation,
                                  instance,
                                  sourceImg,
                                  maskImg,
                                  outputImg,
                                  renderWindow);
      }
      else {
        throw " bad data type!";
      }

    }
    catch(const char *errStr ) {
      bool isAborting = gImageEffectSuite->abort(instance);

      // if we were interrupted, the failed fetch is fine, just return kOfxStatOK
      // otherwise, something weird happened
      if(!isAborting) {
        status = kOfxStatFailed;
      }
      ERROR_IF(!isAborting, " Rendering failed because %s", errStr);
    }

    // all was well, or we were aborted
    return status;
  }
----

The actual pixel processing code does the standard saturation calculation on each
pixel, scaling each of R, G and B around their common average. The tweak we add is
to modulate the amount of the effect by looking at the pixel values of the mask
input if we have one. Again, this is not meant to be fast code, just illustrative.

[source, c++]
.saturation.cpp
----
  ////////////////////////////////////////////////////////////////////////////////
  // iterate over our pixels and process them
  template <class T, int MAX>
  void PixelProcessing(double saturation,
                       OfxImageEffectHandle instance,
                       Image &src,
                       Image &mask,
                       Image &output,
                       OfxRectI renderWindow)
  {
    int nComps = output.nComponents();

    // and do some processing
    for(int y = renderWindow.y1; y < renderWindow.y2; y++) {
      if(y % 20 == 0 && gImageEffectSuite->abort(instance)) break;

      // get the row start for the output image
      T *dstPix = output.pixelAddress<T>(renderWindow.x1, y);

      for(int x = renderWindow.x1; x < renderWindow.x2; x++) {

        // get the source pixel
        T *srcPix = src.pixelAddress<T>(x, y);

        // get the amount to mask by, no mask image means we do the full effect everywhere
        float maskAmount = 1.0f;
        if (mask) {
          // get our mask pixel address
          T *maskPix = mask.pixelAddress<T>(x, y);
          if(maskPix) {
            maskAmount = float(*maskPix)/float(MAX);
          }
          else {
            maskAmount = 0;
          }
        }

        if(srcPix) {
          if(maskAmount == 0) {
            // we have a mask input, but the mask is zero here,
            // so no effect happens, copy source to output
            for(int i = 0; i < nComps; ++i) {
              *dstPix = *srcPix;
              ++dstPix; ++srcPix;
            }
          }
          else {
            // we have a non zero mask or no mask at all

            // find the average of the R, G and B
            float average = (srcPix[0] + srcPix[1] + srcPix[2])/3.0f;

            // scale each component around that average
            for(int c = 0; c < 3; ++c) {
              float value = (srcPix[c] - average) * saturation + average;
              if(MAX != 1) {
                value = Clamp<T, MAX>(value);
              }
              // use the mask to control how much original we should have
              dstPix[c] = Blend(srcPix[c], value, maskAmount);
            }

            if(nComps == 4) { // if we have an alpha, just copy it
              dstPix[3] = srcPix[3];
            }
            dstPix += nComps;
          }
        }
        else {
          // we don't have a pixel in the source image, set output to zero
          for(int i = 0; i < nComps; ++i) {
            *dstPix = 0;
            ++dstPix;
          }
        }
      }
    }
  }
----
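
The Clamp and Blend helpers called above aren't listed in this guide. Sketches of what they
plausibly look like, assuming Clamp pins a value to the legal range of the integer pixel types
and Blend is a simple linear interpolation:

[source, c++]
.saturation.cpp
----
  // clamp a floating point value to the [0, MAX] range of the pixel type
  template <class T, int MAX>
  static inline T Clamp(float value)
  {
    if(value < 0.0f) return T(0);
    if(value > float(MAX)) return T(MAX);
    return T(value);
  }

  // linearly interpolate between the original and processed values, with
  // blend = 0 giving the original value and blend = 1 the processed one
  static inline float Blend(float original, float processed, float blend)
  {
    return original + (processed - original) * blend;
  }
----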

== A Bit Of Housekeeping
You may have noticed I've gone and created an `**Image**` class. I got
bored of passing various pointers, bounds and strides around in my code and
decided to tidy it up.

[source, c++]
.saturation.cpp
----
  ////////////////////////////////////////////////////////////////////////////////
  // class to manage OFX images
  class Image {
  public    :
    // construct from a property set that represents the image
    Image(OfxPropertySetHandle propSet);

    // construct from a clip by fetching an image at the given frame
    Image(OfxImageClipHandle clip, double frame);

    // destructor
    ~Image();

    // get a pixel address, cast to the right type
    template <class T>
    T *pixelAddress(int x, int y)
    {
      return reinterpret_cast<T *>(rawAddress(x, y));
    }

    // Is this image empty?
    operator bool()
    {
      return propSet_ != NULL && dataPtr_ != NULL;
    }

    // bytes per component, 1, 2 or 4 for byte, short and float images
    int bytesPerComponent() const { return bytesPerComponent_; }

    // number of components
    int nComponents() const { return nComponents_; }

  protected :
    void construct();

    // Look up a pixel address in the image. returns null if the pixel was not
    // in the bounds of the image
    void *rawAddress(int x, int y);

    OfxPropertySetHandle propSet_;
    int rowBytes_;
    OfxRectI bounds_;
    char *dataPtr_;
    int nComponents_;
    int bytesPerComponent_;
    int bytesPerPixel_;
  };
----

It takes an OfxPropertySetHandle and pulls all the bits it needs out of that into the class. It uses
all the same pixel access logic as in example 2.
Ideally I should put this in a library which our examples link to, but I'm keeping all
the code for each example in one source file for
illustrative purposes. Feel free to steal this and use it in your own code footnote:[provided
you stick to the conditions listed at the top of the source file].
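
The member functions are not shown above; a sketch of the two interesting ones, the clip
constructor and the pixel lookup, assuming the same property fetching and bounds checking logic
as example 2 (the details here are illustrative, the actual source may differ):

[source, c++]
.saturation.cpp
----
  // construct by fetching an image out of the clip at the given frame;
  // a failed fetch leaves the image in the 'empty' state
  Image::Image(OfxImageClipHandle clip, double frame)
    : propSet_(NULL)
    , dataPtr_(NULL)
  {
    if(clip && gImageEffectSuite->clipGetImage(clip, frame, NULL, &propSet_) == kOfxStatOK) {
      // pull the data pointer, bounds and row bytes out of the image's property set
      gPropertySuite->propGetPointer(propSet_, kOfxImagePropData, 0, (void **) &dataPtr_);
      gPropertySuite->propGetIntN(propSet_, kOfxImagePropBounds, 4, &bounds_.x1);
      gPropertySuite->propGetInt(propSet_, kOfxImagePropRowBytes, 0, &rowBytes_);
      // ... and similarly decode kOfxImageEffectPropComponents and
      // kOfxImageEffectPropPixelDepth into nComponents_, bytesPerComponent_
      // and bytesPerPixel_
    }
  }

  // look up a pixel address, returning NULL if it is outside the image bounds
  void *Image::rawAddress(int x, int y)
  {
    if(!dataPtr_ || x < bounds_.x1 || x >= bounds_.x2 || y < bounds_.y1 || y >= bounds_.y2)
      return NULL;

    return dataPtr_
           + (y - bounds_.y1) * (ptrdiff_t) rowBytes_
           + (x - bounds_.x1) * (ptrdiff_t) bytesPerPixel_;
  }
----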

== Summary
This plugin has shown you

  - the basics of working with multiple contexts,
  - how to handle optional input clips,
  - how to restrict pixel types on input and output clips.