.. SPDX-License-Identifier: GPL-2.0

i.MX Video Capture Driver
=========================

Introduction
------------

The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.

For image capture, the IPU contains the following internal subunits:

- Image DMA Controller (IDMAC)
- Camera Serial Interface (CSI)
- Image Converter (IC)
- Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
- Video De-Interlacing or Combining Block (VDIC)

The IDMAC is the DMA controller for transfer of image frames to and from
memory. Various dedicated DMA channels exist for both video capture and
display paths. During transfer, the IDMAC is also capable of vertical
image flip, 8x8 block transfer (see IRT description), pixel component
re-ordering (for example UYVY to YUYV) within the same colorspace, and
packed <--> planar conversion. The IDMAC can also perform a simple
de-interlacing by interweaving even and odd lines during transfer
(without motion compensation which requires the VDIC).

The CSI is the backend capture unit that interfaces directly with
camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 buses.

The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.

There are three independent "tasks" within the IC that can carry out
conversions concurrently: pre-process encoding, pre-process viewfinder,
and post-processing. Within each task, conversions are split into three
sections: downsizing section, main section (upsizing, flip, colorspace
conversion, and graphics plane combining), and rotation section.

The IPU time-shares the IC task operations. The time-slice granularity
is one burst of eight pixels in the downsizing section, one image line
in the main processing section, one image frame in the rotation section.

The SMFC is composed of four independent FIFOs that each can transfer
captured frames from sensors directly to memory concurrently via four
IDMAC channels.

The IRT carries out 90 and 270 degree image rotation operations. The
rotation operation is carried out on 8x8 pixel blocks at a time. This
operation is supported by the IDMAC which handles the 8x8 block transfer
along with block reordering, in coordination with vertical flip.

The VDIC handles the conversion of interlaced video to progressive, with
support for different motion compensation modes (low, medium, and high
motion). The deinterlaced output frames from the VDIC can be sent to the
IC pre-process viewfinder task for further conversions. The VDIC also
contains a Combiner that combines two image planes, with alpha blending
and color keying.

In addition to the IPU internal subunits, there are two units outside
the IPU that are also involved in video capture on i.MX:

- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
  interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
  to send to a CSI.

For more info, refer to the latest versions of the i.MX5/6 reference
manuals [#f1]_ and [#f2]_.


Features
--------

Some of the features of this driver include:

- Many different pipelines can be configured via the media controller
  API that correspond to the hardware video capture pipelines supported
  by the i.MX.

- Supports parallel, BT.656, and MIPI CSI-2 interfaces.

- Concurrent independent streams, by configuring pipelines to multiple
  video capture interfaces using independent entities.

- Scaling, color-space conversion, horizontal and vertical flip, and
  image rotation via IC task subdevs.

- Many pixel formats supported (RGB, packed and planar YUV, partial
  planar YUV).

- The VDIC subdev supports motion compensated de-interlacing, with three
  motion compensation modes: low, medium, and high motion. Pipelines are
  defined that allow sending frames to the VDIC subdev directly from the
  CSI. Sending frames to the VDIC from memory buffers via an output or
  mem2mem device is planned for the future.

- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
  problems with the ADV718x video decoders.


Topology
--------

The following shows the media topologies for the i.MX6Q SabreSD and
i.MX6Q SabreAuto. Refer to these diagrams in the entity descriptions
in the next section.

The i.MX5/6 topologies can differ upstream from the IPUv3 CSI video
multiplexers, but the internal IPUv3 topology downstream from there
is common to all i.MX5/6 platforms. For example, the SabreSD, with the
MIPI CSI-2 OV5640 sensor, requires the i.MX6 MIPI CSI-2 receiver. But
the SabreAuto has only the ADV7180 decoder on a parallel BT.656 bus, and
therefore does not require the MIPI CSI-2 receiver, so it is missing in
its graph.

.. _imx6q_topology_graph:

.. kernel-figure:: imx6q-sabresd.dot
    :alt:   Diagram of the i.MX6Q SabreSD media pipeline topology
    :align: center

    Media pipeline graph on i.MX6Q SabreSD

.. kernel-figure:: imx6q-sabreauto.dot
    :alt:   Diagram of the i.MX6Q SabreAuto media pipeline topology
    :align: center

    Media pipeline graph on i.MX6Q SabreAuto

Entities
--------

imx6-mipi-csi2
--------------

This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs. Multiple source pads can be enabled to independently
stream from multiple virtual channels.

This entity actually consists of two sub-blocks. One is the MIPI CSI-2
core. This is a Synopsys DesignWare MIPI CSI-2 core. The other sub-block
is a "CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the
four virtual channel streams, providing four separate parallel buses
containing each virtual channel that are routed to CSIs or video
multiplexers as described below.

On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
two video multiplexers. Both CSI0 and CSI1 can receive any virtual
channel, as selected by the video multiplexers.

On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after selection
by a video mux), virtual channels 1 and 2 are hard-wired to IPU1-CSI1
and IPU2-CSI0, respectively, and virtual channel 3 is routed to
IPU2-CSI1 (again selected by a video mux).

ipuX_csiY_mux
-------------

These are the video multiplexers. They have two or more sink pads to
select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have a
single source pad that routes to a CSI (ipuX_csiY entities).

On i.MX6 solo/dual-lite, there are two video mux entities. One sits
in front of IPU1-CSI0 to select between a parallel sensor and any of
the four MIPI CSI-2 virtual channels (a total of five sink pads). The
other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2 virtual
channels.

On i.MX6 Quad, there are two video mux entities. One sits in front of
IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------

These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.

This entity has two source pads. The first source pad can link directly
to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
that require no IDMAC memory buffer transfer.

When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.

When the direct source pad is routed to the ipuX_vdic entity, the VDIC
will carry out motion-compensated de-interlace using "high motion" mode
(see description of ipuX_vdic entity).

The second source pad sends video frames directly to memory buffers
via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".

Note that since the IDMAC source pad makes use of an IDMAC channel,
pixel reordering within the same colorspace can be carried out by the
IDMAC channel. For example, if the CSI sink pad is receiving in UYVY
order, the capture device linked to the IDMAC source pad can capture
in YUYV order. Also, if the CSI sink pad is receiving a packed YUV
format, the capture device can capture a planar YUV format such as
YUV420.

The IDMAC channel at the IDMAC source pad also supports simple
interweave without motion compensation, which is activated if the source
pad's field type is sequential top-bottom or bottom-top, and the
requested capture interface field type is set to interlaced (t-b, b-t,
or unqualified interlaced). The capture interface will enforce the same
field order as the source pad field order (interlaced-bt if source pad
is seq-bt, interlaced-tb if source pad is seq-tb).
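
A minimal sketch of the capture-side negotiation follows; the entity
name, /dev/video node, and field order below are assumptions, and
complete board-specific examples appear in the SabreAuto sections later
in this document:

.. code-block:: none

   # With the ipu1_csi0 source pad already carrying sequential fields
   # (e.g. field:seq-bt propagated from a sensor or decoder), requesting
   # an interlaced field type at the capture interface enables interweave
   v4l2-ctl -d /dev/video4 --set-fmt-video=field=interlaced_bt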

For events produced by ipuX_csiY, see :ref:`imx_api_ipuX_csiY`.

Cropping in ipuX_csiY
---------------------

The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the
crop selection subdev API.

The CSI also supports fixed divide-by-two downscaling independently in
width and height. This is implemented in the ipuX_csiY entities at
the sink pad, using the compose selection subdev API.

The output rectangle at the ipuX_csiY source pad is the same as
the compose rectangle at the sink pad. So the source pad rectangle
cannot be negotiated; it must be set using the compose selection
API at the sink pad (if /2 downscale is desired, otherwise the source
pad rectangle is equal to the incoming rectangle).

To give an example of crop and /2 downscale, this will crop a
1280x960 input frame to 640x480, and then /2 downscale in both
dimensions to 320x240 (assumes ipu1_csi0 is linked to ipu1_csi0_mux):

.. code-block:: none

   media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
   media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
   media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"

Frame Skipping in ipuX_csiY
---------------------------

The CSI supports frame rate decimation, via frame skipping. Frame
rate decimation is specified by setting the frame intervals at
sink and source pads. The ipuX_csiY entity then applies the best
frame skip setting to the CSI to achieve the desired frame rate
at the source pad.

The following example reduces an assumed incoming 60 Hz frame
rate by half at the IDMAC output source pad:

.. code-block:: none

   media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
   media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"

Frame Interval Monitor in ipuX_csiY
-----------------------------------

See :ref:`imx_api_FIM`.

ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
specified with the menu control V4L2_CID_DEINTERLACING_MODE. The VDIC
has two sink pads and a single source pad.
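
Since V4L2 controls are inherited by the capture device at the end of
the active pipeline (see Usage Notes below), the mode can be selected
with v4l2-ctl before streaming. A hedged sketch, assuming the prpvf
capture node is /dev/video2 and that the control is exposed under the
name deinterlacing_mode; confirm the exact name and menu indexes with
the -L listing:

.. code-block:: none

   # List inherited controls to confirm the control name and menu values
   v4l2-ctl -d /dev/video2 -L
   # Select a motion compensation mode (the menu index is an assumption)
   v4l2-ctl -d /dev/video2 --set-ctrl=deinterlacing_mode=2

Note that with the CSI direct link described below, only high motion
mode applies.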

The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.

When the IDMAC sink pad is activated, it receives from an output
or mem2mem device node. With this pipeline, the VDIC can also operate
in low and medium modes, because these modes require receiving
frames from memory buffers. Note that an output or mem2mem device
is not implemented yet, so this sink pad currently has no links.

The source pad routes to the IC pre-processing entity ipuX_ic_prp.

ipuX_ic_prp
-----------

This is the IC pre-processing entity. It acts as a router, routing
data from its sink pad to one or both of its source pads.

This entity has a single sink pad. The sink pad can receive from the
ipuX_csiY direct pad, or from ipuX_vdic.

This entity has two source pads. One source pad routes to the
pre-process encode task entity (ipuX_ic_prpenc), the other to the
pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
can be activated at the same time if the sink pad is receiving from
ipuX_csiY. Only the source pad to the pre-process viewfinder task entity
can be activated if the sink pad is receiving from ipuX_vdic (frames
from the VDIC can only be processed by the pre-process viewfinder task).
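
For instance, both IC tasks can be fed from the same CSI direct pad
with links like the following (a sketch only; the entity names assume
IPU1 CSI0, and the pad numbers follow the pipeline descriptions and
examples later in this document):

.. code-block:: none

   # CSI direct pad into the IC pre-processor, then fan out to both tasks
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
   media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"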

ipuX_ic_prpenc
--------------

This is the IC pre-processing encode entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is
routed to a capture device node, with a node name of the format
"ipuX_ic_prpenc capture".

This entity performs the IC pre-process encode task operations:
color-space conversion, resizing (downscaling and upscaling),
horizontal and vertical flip, and 90/270 degree rotation. Flip
and rotation are provided via standard V4L2 controls.
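
For example, flip and rotation can be requested with the standard
control names via v4l2-ctl (a sketch; the capture node behind
"ipuX_ic_prpenc capture" is assumed to be /dev/video1 here, and the
controls can equally be set on the owning subdev node):

.. code-block:: none

   # Enable vertical flip and a 90 degree rotation on the prpenc task
   v4l2-ctl -d /dev/video1 --set-ctrl=vertical_flip=1
   v4l2-ctl -d /dev/video1 --set-ctrl=rotate=90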

Like the ipuX_csiY IDMAC source, this entity also supports simple
de-interlace without motion compensation, and pixel reordering.

ipuX_ic_prpvf
-------------

This is the IC pre-processing viewfinder entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is routed
to a capture device node, with a node name of the format
"ipuX_ic_prpvf capture".

This entity is identical in operation to ipuX_ic_prpenc, with the same
resizing and CSC operations and flip/rotation controls. It will receive
and process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
receiving from ipuX_vdic.

Like the ipuX_csiY IDMAC source, this entity supports simple
interweaving without motion compensation. However, note that if the
ipuX_vdic is included in the pipeline (ipuX_ic_prp is receiving from
ipuX_vdic), it's not possible to use interweave in ipuX_ic_prpvf,
since the ipuX_vdic has already carried out de-interlacing (with
motion compensation) and therefore the field type output from
ipuX_vdic can only be none (progressive).

Capture Pipelines
-----------------

The following sections describe the various use-cases supported by the
pipelines.

The links shown do not include the backend sensor, video mux, or MIPI
CSI-2 receiver links, since those depend on the type of sensor interface
(parallel or MIPI CSI-2). So these pipelines begin with:

sensor -> ipuX_csiY_mux -> ...

for parallel sensors, or:

sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...

for MIPI CSI-2 sensors. The imx6-mipi-csi2 receiver may need to route
to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
on the MIPI CSI-2 virtual channel, hence ipuX_csiY_mux is shown in
parentheses.

Unprocessed Video Capture:
--------------------------

Send frames directly from sensor to camera device interface node, with
no conversions, via ipuX_csiY IDMAC source pad:

-> ipuX_csiY:2 -> ipuX_csiY capture

IC Direct Conversions:
----------------------

This pipeline uses the preprocess encode entity to route frames directly
from the CSI to the IC, to carry out scaling up to 1024x1024 resolution,
CSC, flipping, and image rotation:

-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 -> ipuX_ic_prpenc capture

Motion Compensated De-interlace:
--------------------------------

This pipeline routes frames from the CSI direct pad to the VDIC entity to
support motion-compensated de-interlacing (high motion mode only),
scaling up to 1024x1024, CSC, flip, and rotation:

-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 -> 0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture


Usage Notes
-----------

To aid in configuration and for backward compatibility with V4L2
applications that access controls only from video device nodes, the
capture device interfaces inherit controls from the active entities
in the current pipeline, so controls can be accessed either directly
from the subdev or from the active capture device interface. For
example, the FIM controls are available either from the ipuX_csiY
subdevs or from the active capture device.
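
For example, the full set of inherited controls can be listed on the
capture node, or on the owning subdev node, with v4l2-ctl (the device
nodes below are assumptions; use media-ctl -p to find the actual nodes
on a given board):

.. code-block:: none

   # Controls inherited by the capture device from the active pipeline
   v4l2-ctl -d /dev/video0 -L
   # The same controls can also be accessed on the owning subdev node
   v4l2-ctl -d /dev/v4l-subdev4 -L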

The following are specific usage notes for the Sabre* reference
boards:


i.MX6Q SabreLite with OV5642 and OV5640
---------------------------------------

This platform requires the OmniVision OV5642 module with a parallel
camera interface, and the OV5640 module with a MIPI CSI-2
interface. Both modules are available from Boundary Devices:

- https://boundarydevices.com/product/nit6x_5mp
- https://boundarydevices.com/product/nit6x_5mp_mipi

Note that if only one camera module is available, the other sensor
node can be disabled in the device tree.

The OV5642 module is connected to the parallel bus input on the i.MX
internal video mux to IPU1 CSI0. Its I2C interface connects to i2c bus 2.

The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI CSI-2
receiver, and the four virtual channel outputs from the receiver are
routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1 CSI1,
vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The OV5640 is
also connected to i2c bus 2 on the SabreLite, so the OV5642 and OV5640
must not share the same i2c slave address.

The following basic example configures unprocessed video capture
pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is
imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are
configured to output 640x480; the OV5642 outputs YUYV2X8 and the
OV5640 UYVY2X8:

.. code-block:: none

   # Setup links for OV5642
   media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Setup links for OV5640
   media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
   media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]"
   # Configure pads for OV5642 pipeline
   media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]"
   # Configure pads for OV5640 pipeline
   media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
   media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"

Streaming can then begin independently on the capture device nodes
"ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can
be used to select any supported YUV pixelformat on the capture device
nodes, including planar.
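
For instance, the node behind "ipu1_csi0 capture" can be resolved with
media-ctl and switched to a planar format; the resolved /dev/video0
node and the YU12 fourcc below are assumptions, so check the
--list-formats output on the actual system:

.. code-block:: none

   # Resolve the video node behind "ipu1_csi0 capture"
   media-ctl -e "ipu1_csi0 capture"
   # List the pixel formats it offers, then pick planar YUV 4:2:0
   v4l2-ctl -d /dev/video0 --list-formats
   v4l2-ctl -d /dev/video0 --set-fmt-video=pixelformat=YU12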

i.MX6Q SabreAuto with ADV7180 decoder
-------------------------------------

On the i.MX6Q SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.

The following example configures a pipeline to capture from the ADV7180
video decoder, assuming NTSC 720x480 input signals, using simple
interweave (unconverted and without motion compensation). The adv7180
must output sequential or alternating fields (field type 'seq-bt' for
NTSC, or 'alternate'):

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
   # Configure "ipu1_csi0 capture" interface (assumed at /dev/video4)
   v4l2-ctl -d4 --set-fmt-video=field=interlaced_bt

Streaming can then begin on /dev/video4. The v4l2-ctl tool can also be
used to select any supported YUV pixelformat on /dev/video4.
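
To verify the interweaved output, a short test capture can be run with
v4l2-ctl; this is only a sketch, and the planar YU12 format and the
frame count are arbitrary choices:

.. code-block:: none

   # Optionally pick a planar output format, then grab 60 frames to a file
   v4l2-ctl -d4 --set-fmt-video=pixelformat=YU12
   v4l2-ctl -d4 --stream-mmap --stream-count=60 --stream-to=/tmp/capture.yuv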

This example configures a pipeline to capture from the ADV7180
video decoder, assuming PAL 720x576 input signals, with Motion
Compensated de-interlacing. The adv7180 must output sequential or
alternating fields (field type 'seq-tb' for PAL, or 'alternate').

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
   media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
   media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x576]"
   media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
   media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]"
   # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2)
   v4l2-ctl -d2 --set-fmt-video=field=none

Streaming can then begin on /dev/video2. The v4l2-ctl tool can also be
used to select any supported YUV pixelformat on /dev/video2.

This platform accepts Composite Video analog inputs to the ADV7180 on
Ain1 (connector J42).

i.MX6DL SabreAuto with ADV7180 decoder
--------------------------------------

On the i.MX6DL SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.

The following example configures a pipeline to capture from the ADV7180
video decoder, assuming NTSC 720x480 input signals, using simple
interweave (unconverted and without motion compensation). The adv7180
must output sequential or alternating fields (field type 'seq-bt' for
NTSC, or 'alternate'):

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]"
   media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x480 field:seq-bt]"
   media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x480]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480]"
   # Configure "ipu1_csi0 capture" interface (assumed at /dev/video0)
   v4l2-ctl -d0 --set-fmt-video=field=interlaced_bt

Streaming can then begin on /dev/video0. The v4l2-ctl tool can also be
used to select any supported YUV pixelformat on /dev/video0.

This example configures a pipeline to capture from the ADV7180
video decoder, assuming PAL 720x576 input signals, with Motion
Compensated de-interlacing. The adv7180 must output sequential or
alternating fields (field type 'seq-tb' for PAL, or 'alternate').

.. code-block:: none

   # Setup links
   media-ctl -l "'adv7180 4-0021':0 -> 'ipu1_csi0_mux':4[1]"
   media-ctl -l "'ipu1_csi0_mux':5 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
   media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
   media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
   # Configure pads
   media-ctl -V "'adv7180 4-0021':0 [fmt:UYVY2X8/720x576 field:seq-tb]"
   media-ctl -V "'ipu1_csi0_mux':5 [fmt:UYVY2X8/720x576]"
   media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x576]"
   media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x576 field:none]"
   media-ctl -V "'ipu1_ic_prpvf':1 [fmt:AYUV32/720x576 field:none]"
   # Configure "ipu1_ic_prpvf capture" interface (assumed at /dev/video2)
   v4l2-ctl -d2 --set-fmt-video=field=none

Streaming can then begin on /dev/video2. The v4l2-ctl tool can also be
used to select any supported YUV pixelformat on /dev/video2.

This platform accepts Composite Video analog inputs to the ADV7180 on
Ain1 (connector J42).

i.MX6Q SabreSD with MIPI CSI-2 OV5640
-------------------------------------

Similar to the i.MX6Q SabreLite, the i.MX6Q SabreSD supports a parallel
interface OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640
module. The OV5642 connects to i2c bus 1 and the OV5640 to i2c bus 2.

The device tree for SabreSD includes OF graphs for both the parallel
OV5642 and the MIPI CSI-2 OV5640, but as of this writing only the MIPI
CSI-2 OV5640 has been tested, so the OV5642 node is currently disabled.
The OV5640 module connects to MIPI connector J5. The NXP part number
for the OV5640 module that connects to the SabreSD board is H120729.

The following example configures an unprocessed video capture pipeline
to capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 0:

.. code-block:: none

   # Setup links
   media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
   # Configure pads
   media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480]"

Streaming can then begin on the "ipu1_csi0 capture" node. The v4l2-ctl
tool can be used to select any supported pixelformat on the capture
device node.

To determine which /dev/video node corresponds to
"ipu1_csi0 capture":

.. code-block:: none

   media-ctl -e "ipu1_csi0 capture"
   /dev/video0

/dev/video0 is the streaming element in this case.

Starting the streaming via v4l2-ctl:

.. code-block:: none

   v4l2-ctl --stream-mmap -d /dev/video0

Starting the streaming via Gstreamer and sending the content to the display:

.. code-block:: none

   gst-launch-1.0 v4l2src device=/dev/video0 ! kmssink

The following example configures a direct conversion pipeline to capture
from the OV5640, transmitting on MIPI CSI-2 virtual channel 0. It also
shows colorspace conversion and scaling at IC output.

.. code-block:: none

   # Setup links
   media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
   media-ctl -l "'imx6-mipi-csi2':1 -> 'ipu1_csi0_mux':0[1]"
   media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
   media-ctl -l "'ipu1_csi0':1 -> 'ipu1_ic_prp':0[1]"
   media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
   media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]"
   # Configure pads
   media-ctl -V "'ov5640 1-003c':0 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'imx6-mipi-csi2':1 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/640x480]"
   media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/640x480]"
   media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/640x480]"
   media-ctl -V "'ipu1_ic_prpenc':1 [fmt:ARGB8888_1X32/800x600]"
   # Set a format at the capture interface
   v4l2-ctl -d /dev/video1 --set-fmt-video=pixelformat=RGB3

Streaming can then begin on the "ipu1_ic_prpenc capture" node.

To determine which /dev/video node corresponds to
"ipu1_ic_prpenc capture":

.. code-block:: none

   media-ctl -e "ipu1_ic_prpenc capture"
   /dev/video1

/dev/video1 is the streaming element in this case.

Starting the streaming via v4l2-ctl:

.. code-block:: none

   v4l2-ctl --stream-mmap -d /dev/video1

Starting the streaming via Gstreamer and sending the content to the display:

.. code-block:: none

   gst-launch-1.0 v4l2src device=/dev/video1 ! kmssink

Known Issues
------------

1. When using 90 or 270 degree rotation control at capture resolutions
   near the IC resizer limit of 1024x1024, and combined with planar
   pixel formats (YUV420, YUV422p), frame capture will often fail with
   no end-of-frame interrupts from the IDMAC channel. To work around
   this, use lower resolution and/or packed formats (YUYV, RGB3, etc.)
   when 90 or 270 rotations are needed (see the sketch below).
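
A hedged illustration of this workaround, assuming the
"ipuX_ic_prpenc capture" node is /dev/video1; the resolution and packed
format below are arbitrary choices:

.. code-block:: none

   # Use a packed format at a modest resolution before requesting rotation
   v4l2-ctl -d /dev/video1 --set-fmt-video=width=640,height=480,pixelformat=YUYV
   v4l2-ctl -d /dev/video1 --set-ctrl=rotate=90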


File list
---------

drivers/staging/media/imx/
include/media/imx.h
include/linux/imx-media.h

References
----------

.. [#f1] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf
.. [#f2] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf


Authors
-------

- Steve Longerbeam <steve_longerbeam@mentor.com>
- Philipp Zabel <kernel@pengutronix.de>
- Russell King <linux@armlinux.org.uk>

Copyright (C) 2012-2017 Mentor Graphics Inc.