1IJG JPEG LIBRARY:  SYSTEM ARCHITECTURE
2
3Copyright (C) 1991-1995, Thomas G. Lane.
4This file is part of the Independent JPEG Group's software.
5For conditions of distribution and use, see the accompanying README file.
6
7
8This file provides an overview of the architecture of the IJG JPEG software;
9that is, the functions of the various modules in the system and the interfaces
10between modules.  For more precise details about any data structure or calling
11convention, see the include files and comments in the source code.
12
13We assume that the reader is already somewhat familiar with the JPEG standard.
14The README file includes references for learning about JPEG.  The file
15libjpeg.doc describes the library from the viewpoint of an application
16programmer using the library; it's best to read that file before this one.
17Also, the file coderules.doc describes the coding style conventions we use.
18
19In this document, JPEG-specific terminology follows the JPEG standard:
20  A "component" means a color channel, e.g., Red or Luminance.
21  A "sample" is a single component value (i.e., one number in the image data).
22  A "coefficient" is a frequency coefficient (a DCT transform output number).
23  A "block" is an 8x8 group of samples or coefficients.
24  A "data unit" is an abstract data type which is either a block for lossy
25	(DCT-based) codecs or a sample for lossless (predictive) codecs.
26  An "MCU" (minimum coded unit) is an interleaved set of data units of size
27	determined by the sampling factors, or a single data unit in a
28	noninterleaved scan.
29We do not use the terms "pixel" and "sample" interchangeably.  When we say
30pixel, we mean an element of the full-size image, while a sample is an element
31of the downsampled image.  Thus the number of samples may vary across
32components while the number of pixels does not.  (This terminology is not used
33rigorously throughout the code, but it is used in places where confusion would
34otherwise result.)
35
36
37*** System features ***
38
39The IJG distribution contains two parts:
40  * A subroutine library for JPEG compression and decompression.
41  * cjpeg/djpeg, two sample applications that use the library to transform
42    JFIF JPEG files to and from several other image formats.
43cjpeg/djpeg are of no great intellectual complexity: they merely add a simple
44command-line user interface and I/O routines for several uncompressed image
45formats.  This document concentrates on the library itself.
46
47We desire the library to be capable of supporting all JPEG baseline, extended
48sequential, and progressive DCT processes, as well as the lossless (spatial)
49process.  Hierarchical processes are not supported.
50
51Within these limits, any set of compression parameters allowed by the JPEG
52spec should be readable for decompression.  (We can be more restrictive about
53what formats we can generate.)  Although the system design allows for all
54parameter values, some uncommon settings are not yet implemented and may
55never be; nonintegral sampling ratios are the prime example.  Furthermore,
56we treat 8-bit vs. 12-bit data precision as a compile-time switch, not a
57run-time option, because most machines can store 8-bit pixels much more
58compactly than 12-bit.
59
60For legal reasons, JPEG arithmetic coding is not currently supported, but
61extending the library to include it would be straightforward.
62
63By itself, the library handles only interchange JPEG datastreams --- in
64particular the widely used JFIF file format.  The library can be used by
65surrounding code to process interchange or abbreviated JPEG datastreams that
66are embedded in more complex file formats.  (For example, libtiff uses this
67library to implement JPEG compression within the TIFF file format.)
68
69The library includes a substantial amount of code that is not covered by the
70JPEG standard but is necessary for typical applications of JPEG.  These
71functions preprocess the image before JPEG compression or postprocess it after
72decompression.  They include colorspace conversion, downsampling/upsampling,
73and color quantization.  This code can be omitted if not needed.
74
A wide range of quality vs. speed tradeoffs is possible in JPEG processing,
76and even more so in decompression postprocessing.  The decompression library
77provides multiple implementations that cover most of the useful tradeoffs,
78ranging from very-high-quality down to fast-preview operation.  On the
79compression side we have generally not provided low-quality choices, since
80compression is normally less time-critical.  It should be understood that the
81low-quality modes may not meet the JPEG standard's accuracy requirements;
82nonetheless, they are useful for viewers.
83
84
85*** Portability issues ***
86
87Portability is an essential requirement for the library.  The key portability
88issues that show up at the level of system architecture are:
89
901.  Memory usage.  We want the code to be able to run on PC-class machines
91with limited memory.  Images should therefore be processed sequentially (in
92strips), to avoid holding the whole image in memory at once.  Where a
93full-image buffer is necessary, we should be able to use either virtual memory
94or temporary files.
95
962.  Near/far pointer distinction.  To run efficiently on 80x86 machines, the
97code should distinguish "small" objects (kept in near data space) from
98"large" ones (kept in far data space).  This is an annoying restriction, but
99fortunately it does not impact code quality for less brain-damaged machines,
100and the source code clutter turns out to be minimal with sufficient use of
101pointer typedefs.
102
1033. Data precision.  We assume that "char" is at least 8 bits, "short" and
104"int" at least 16, "long" at least 32.  The code will work fine with larger
105data sizes, although memory may be used inefficiently in some cases.  However,
106the JPEG compressed datastream must ultimately appear on external storage as a
107sequence of 8-bit bytes if it is to conform to the standard.  This may pose a
108problem on machines where char is wider than 8 bits.  The library represents
compressed data as arrays of values of type JOCTET.  If no data type
110exactly 8 bits wide is available, custom data source and data destination
111modules must be written to unpack and pack the chosen JOCTET datatype into
1128-bit external representation.
113
114
115*** System overview ***
116
117The compressor and decompressor are each divided into two main sections:
118the JPEG compressor or decompressor proper, and the preprocessing or
119postprocessing functions.  The interface between these two sections is the
120image data that the official JPEG spec regards as its input or output: this
121data is in the colorspace to be used for compression, and it is downsampled
122to the sampling factors to be used.  The preprocessing and postprocessing
123steps are responsible for converting a normal image representation to or from
124this form.  (Those few applications that want to deal with YCbCr downsampled
125data can skip the preprocessing or postprocessing step.)
126
127Looking more closely, the compressor library contains the following main
128elements:
129
130  Preprocessing:
131    * Color space conversion (e.g., RGB to YCbCr).
132    * Edge expansion and downsampling.  Optionally, this step can do simple
133      smoothing --- this is often helpful for low-quality source data.
134  Lossy JPEG proper:
135    * MCU assembly, DCT, quantization.
136    * Entropy coding (sequential or progressive, Huffman or arithmetic).
137  Lossless JPEG proper:
138    * Point transform.
139    * Prediction, differencing.
    * Entropy coding (Huffman or arithmetic).
141
142In addition to these modules we need overall control, marker generation,
143and support code (memory management & error handling).  There is also a
144module responsible for physically writing the output data --- typically
145this is just an interface to fwrite(), but some applications may need to
146do something else with the data.
147
148The decompressor library contains the following main elements:
149
150  Lossy JPEG proper:
151    * Entropy decoding (sequential or progressive, Huffman or arithmetic).
152    * Dequantization, inverse DCT, MCU disassembly.
153  Lossless JPEG proper:
154    * Entropy decoding (Huffman or arithmetic).
155    * Prediction, undifferencing.
156    * Point transform, sample size scaling.
157  Postprocessing:
158    * Upsampling.  Optionally, this step may be able to do more general
159      rescaling of the image.
160    * Color space conversion (e.g., YCbCr to RGB).  This step may also
161      provide gamma adjustment [ currently it does not ].
162    * Optional color quantization (e.g., reduction to 256 colors).
163    * Optional color precision reduction (e.g., 24-bit to 15-bit color).
164      [This feature is not currently implemented.]
165
166We also need overall control, marker parsing, and a data source module.
167The support code (memory management & error handling) can be shared with
168the compression half of the library.
169
170There may be several implementations of each of these elements, particularly
171in the decompressor, where a wide range of speed/quality tradeoffs is very
172useful.  It must be understood that some of the best speedups involve
173merging adjacent steps in the pipeline.  For example, upsampling, color space
174conversion, and color quantization might all be done at once when using a
175low-quality ordered-dither technique.  The system architecture is designed to
176allow such merging where appropriate.
177
178
179Note: it is convenient to regard edge expansion (padding to block boundaries)
180as a preprocessing/postprocessing function, even though the JPEG spec includes
181it in compression/decompression.  We do this because downsampling/upsampling
182can be simplified a little if they work on padded data: it's not necessary to
183have special cases at the right and bottom edges.  Therefore the interface
184buffer is always an integral number of blocks wide and high, and we expect
185compression preprocessing to pad the source data properly.  Padding will occur
186only to the next block (8-sample) boundary.  In an interleaved-scan situation,
187additional dummy blocks may be used to fill out MCUs, but the MCU assembly and
188disassembly logic will create or discard these blocks internally.  (This is
189advantageous for speed reasons, since we avoid DCTing the dummy blocks.
190It also permits a small reduction in file size, because the compressor can
191choose dummy block contents so as to minimize their size in compressed form.
192Finally, it makes the interface buffer specification independent of whether
193the file is actually interleaved or not.)  Applications that wish to deal
194directly with the downsampled data must provide similar buffering and padding
195for odd-sized images.
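
To make the padding rule concrete, here is roughly what right-edge and
bottom-edge expansion amounts to for one component.  This is an illustrative
sketch only, not library code (the real work is split between the downsampler
and the preprocessing controller, as described later); the JSAMPARRAY and
JSAMPLE types are explained under "Data formats" below, and padded_width and
padded_height are assumed to be multiples of the block size.

    #include "jpeglib.h"

    /* Pad one component out to the next block boundary by replicating
     * the rightmost sample of each row and then the bottom row.
     */
    void
    pad_to_block_boundary (JSAMPARRAY data, int logical_width, int logical_height,
                           int padded_width, int padded_height)
    {
      int row, col;

      for (row = 0; row < logical_height; row++) {
        JSAMPLE edge = data[row][logical_width - 1];
        for (col = logical_width; col < padded_width; col++)
          data[row][col] = edge;            /* duplicate rightmost sample */
      }
      for (row = logical_height; row < padded_height; row++)
        for (col = 0; col < padded_width; col++)
          data[row][col] = data[logical_height - 1][col];  /* duplicate bottom row */
    }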
196
197
198*** Poor man's object-oriented programming ***
199
200It should be clear by now that we have a lot of quasi-independent processing
201steps, many of which have several possible behaviors.  To avoid cluttering the
202code with lots of switch statements, we use a simple form of object-style
203programming to separate out the different possibilities.
204
205For example, two different color quantization algorithms could be implemented
206as two separate modules that present the same external interface; at runtime,
207the calling code will access the proper module indirectly through an "object".
208
209We can get the limited features we need while staying within portable C.
210The basic tool is a function pointer.  An "object" is just a struct
211containing one or more function pointer fields, each of which corresponds to
212a method name in real object-oriented languages.  During initialization we
213fill in the function pointers with references to whichever module we have
214determined we need to use in this run.  Then invocation of the module is done
215by indirecting through a function pointer; on most machines this is no more
216expensive than a switch statement, which would be the only other way of
217making the required run-time choice.  The really significant benefit, of
218course, is keeping the source code clean and well structured.
219
220We can also arrange to have private storage that varies between different
221implementations of the same kind of object.  We do this by making all the
222module-specific object structs be separately allocated entities, which will
223be accessed via pointers in the master compression or decompression struct.
224The "public" fields or methods for a given kind of object are specified by
225a commonly known struct.  But a module's initialization code can allocate
226a larger struct that contains the common struct as its first member, plus
227additional private fields.  With appropriate pointer casting, the module's
228internal functions can access these private fields.  (For a simple example,
229see jdatadst.c, which implements the external interface specified by struct
230jpeg_destination_mgr, but adds extra fields.)
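
For reference, here is that pattern in condensed form.  The declarations are
paraphrased from jpeglib.h and jdatadst.c (the real headers wrap the method
pointers in the JMETHOD macro, omitted here):

    /* Public interface: what every destination manager looks like to the
     * rest of the library. */
    struct jpeg_destination_mgr {
      JOCTET * next_output_byte;    /* => next byte to write in buffer */
      size_t free_in_buffer;        /* # of byte spaces remaining in buffer */

      void (*init_destination) (j_compress_ptr cinfo);
      boolean (*empty_output_buffer) (j_compress_ptr cinfo);
      void (*term_destination) (j_compress_ptr cinfo);
    };

    /* Private extension used by jdatadst.c: the public struct must be the
     * first member, so the same pointer can be stored in cinfo->dest and
     * cast back to the larger type by the module's own methods. */
    typedef struct {
      struct jpeg_destination_mgr pub;   /* public fields */
      FILE * outfile;                    /* target stream */
      JOCTET * buffer;                   /* start of output buffer */
    } my_destination_mgr;

Inside one of its methods, the module recovers its private fields with a cast
such as

    my_destination_mgr * dest = (my_destination_mgr *) cinfo->dest;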
231
232(Of course this would all be a lot easier if we were using C++, but we are
233not yet prepared to assume that everyone has a C++ compiler.)
234
235An important benefit of this scheme is that it is easy to provide multiple
236versions of any method, each tuned to a particular case.  While a lot of
237precalculation might be done to select an optimal implementation of a method,
238the cost per invocation is constant.  For example, the upsampling step might
239have a "generic" method, plus one or more "hardwired" methods for the most
240popular sampling factors; the hardwired methods would be faster because they'd
241use straight-line code instead of for-loops.  The cost to determine which
242method to use is paid only once, at startup, and the selection criteria are
243hidden from the callers of the method.
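
A hypothetical sketch of such startup-time selection (the type, field, and
function names here are illustrative only, not the library's actual
identifiers):

    #include "jpeglib.h"

    extern void upsample_2x2_hardwired (JSAMPARRAY in, JSAMPARRAY out);
    extern void upsample_generic (JSAMPARRAY in, JSAMPARRAY out);

    struct my_upsampler {
      void (*upsample_row_group) (JSAMPARRAY in, JSAMPARRAY out);
      /* ... private state ... */
    };

    void
    select_upsample_method (struct my_upsampler * up, int h_factor, int v_factor)
    {
      if (h_factor == 2 && v_factor == 2)
        up->upsample_row_group = upsample_2x2_hardwired;  /* straight-line code */
      else
        up->upsample_row_group = upsample_generic;        /* general loops */
    }

Thereafter, callers simply invoke (*up->upsample_row_group)(...) without
knowing which implementation was chosen.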
244
245This plan differs a little bit from usual object-oriented structures, in that
246only one instance of each object class will exist during execution.  The
247reason for having the class structure is that on different runs we may create
248different instances (choose to execute different modules).  You can think of
249the term "method" as denoting the common interface presented by a particular
250set of interchangeable functions, and "object" as denoting a group of related
251methods, or the total shared interface behavior of a group of modules.
252
253
254*** Overall control structure ***
255
256We previously mentioned the need for overall control logic in the compression
257and decompression libraries.  In IJG implementations prior to v5, overall
258control was mostly provided by "pipeline control" modules, which proved to be
259large, unwieldy, and hard to understand.  To improve the situation, the
260control logic has been subdivided into multiple modules.  The control modules
261consist of:
262
2631. Master control for module selection and initialization.  This has two
264responsibilities:
265
266   1A.  Startup initialization at the beginning of image processing.
267        The individual processing modules to be used in this run are selected
268        and given initialization calls.
269
270   1B.  Per-pass control.  This determines how many passes will be performed
271        and calls each active processing module to configure itself
272        appropriately at the beginning of each pass.  End-of-pass processing,
273	where necessary, is also invoked from the master control module.
274
275   Method selection is partially distributed, in that a particular processing
276   module may contain several possible implementations of a particular method,
277   which it will select among when given its initialization call.  The master
278   control code need only be concerned with decisions that affect more than
279   one module.
280
2812. Data buffering control.  A separate control module exists for each
282   inter-processing-step data buffer.  This module is responsible for
283   invoking the processing steps that write or read that data buffer.
284
285Each buffer controller sees the world as follows:
286
287input data => processing step A => buffer => processing step B => output data
288                      |              |               |
289              ------------------ controller ------------------
290
291The controller knows the dataflow requirements of steps A and B: how much data
292they want to accept in one chunk and how much they output in one chunk.  Its
293function is to manage its buffer and call A and B at the proper times.
294
295A data buffer control module may itself be viewed as a processing step by a
296higher-level control module; thus the control modules form a binary tree with
297elementary processing steps at the leaves of the tree.
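
In outline, each controller's per-iteration job amounts to something like the
following.  The names are hypothetical, not actual library code; real
controllers add strip bookkeeping, suspension checks, and multi-pass logic.

    #include "jpeglib.h"

    extern int step_A_fill_buffer (JSAMPARRAY buf, int max_rows);    /* producer */
    extern void step_B_empty_buffer (JSAMPARRAY buf, int num_rows);  /* consumer */

    typedef struct {
      JSAMPARRAY strip;         /* the inter-step buffer */
      int strip_height;         /* buffer capacity, in rows */
    } my_buffer_controller;

    void
    process_some_data (my_buffer_controller * c)
    {
      /* Ask step A to fill the strip buffer, then step B to drain it. */
      int rows_ready = step_A_fill_buffer(c->strip, c->strip_height);

      step_B_empty_buffer(c->strip, rows_ready);
    }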
298
299The control modules are objects.  A considerable amount of flexibility can
300be had by replacing implementations of a control module.  For example:
301* Merging of adjacent steps in the pipeline is done by replacing a control
302  module and its pair of processing-step modules with a single processing-
303  step module.  (Hence the possible merges are determined by the tree of
304  control modules.)
305* In some processing modes, a given interstep buffer need only be a "strip"
306  buffer large enough to accommodate the desired data chunk sizes.  In other
307  modes, a full-image buffer is needed and several passes are required.
308  The control module determines which kind of buffer is used and manipulates
309  virtual array buffers as needed.  One or both processing steps may be
310  unaware of the multi-pass behavior.
311
312In theory, we might be able to make all of the data buffer controllers
313interchangeable and provide just one set of implementations for all.  In
314practice, each one contains considerable special-case processing for its
315particular job.  The buffer controller concept should be regarded as an
316overall system structuring principle, not as a complete description of the
317task performed by any one controller.
318
319
320*** Codec object structure ***
321
322As noted above, this library supports both the lossy (DCT-based) and lossless
323JPEG processes.  Because these processes have little in common with one another
324(and their implementations share very little code), we need to provide a way to
isolate the underlying JPEG process from the rest of the library.  This is
accomplished by introducing an abstract "codec object" which acts as a generic
327interface to the JPEG (de)compressor proper.
328
329Using the power of the object-oriented scheme described above, we build the
330lossy and lossless modules as two separate implementations of the codec object.
331Switching between lossy and lossless processes then becomes as trivial as
332assigning the appropriate method pointers during initialization of the library.
333
334
335*** Compression object structure ***
336
337Here is a sketch of the logical structure of the JPEG compression library:
338
339                                                 |-- Colorspace conversion
340                  |-- Preprocessing controller --|
341                  |                              |-- Downsampling
342                  |
343Main controller --|
344                  |                       /--> Lossy codec
345                  |                      /
346                  |-- Compression codec <          *OR*
347                                         \
348                                          \--> Lossless codec
349
350
351where the lossy codec looks like:
352
353                             |-- Forward DCT, quantize
354<-- Coefficient controller --|
355                             |-- Entropy encoding
356
357
358and the lossless codec looks like:
359
360                            |-- Point transformation
361                            |
362<-- Difference controller --|-- Prediction, differencing
363                            |
364                            |-- Lossless entropy encoding
365
366
367This sketch also describes the flow of control (subroutine calls) during
368typical image data processing.  Each of the components shown in the diagram is
369an "object" which may have several different implementations available.  One
370or more source code files contain the actual implementation(s) of each object.
371
372The objects shown above are:
373
374* Main controller: buffer controller for the subsampled-data buffer, which
375  holds the preprocessed input data.  This controller invokes preprocessing to
376  fill the subsampled-data buffer, and JPEG compression to empty it.  There is
377  usually no need for a full-image buffer here; a strip buffer is adequate.
378
379* Preprocessing controller: buffer controller for the downsampling input data
380  buffer, which lies between colorspace conversion and downsampling.  Note
381  that a unified conversion/downsampling module would probably replace this
382  controller entirely.
383
384* Colorspace conversion: converts application image data into the desired
385  JPEG color space; also changes the data from pixel-interleaved layout to
386  separate component planes.  Processes one pixel row at a time.
387
388* Downsampling: performs reduction of chroma components as required.
389  Optionally may perform pixel-level smoothing as well.  Processes a "row
390  group" at a time, where a row group is defined as Vmax pixel rows of each
391  component before downsampling, and Vk sample rows afterwards (remember Vk
392  differs across components).  Some downsampling or smoothing algorithms may
393  require context rows above and below the current row group; the
394  preprocessing controller is responsible for supplying these rows via proper
395  buffering.  The downsampler is responsible for edge expansion at the right
396  edge (i.e., extending each sample row to a multiple of 8 samples); but the
397  preprocessing controller is responsible for vertical edge expansion (i.e.,
398  duplicating the bottom sample row as needed to make a multiple of 8 rows).
399
400* Coefficient controller: buffer controller for the DCT-coefficient data.
401  This controller handles MCU assembly, including insertion of dummy DCT
402  blocks when needed at the right or bottom edge.  When performing
403  Huffman-code optimization or emitting a multiscan JPEG file, this
404  controller is responsible for buffering the full image.  The equivalent of
405  one fully interleaved MCU row of subsampled data is processed per call,
406  even when the JPEG file is noninterleaved.
407
408* Forward DCT and quantization: Perform DCT, quantize, and emit coefficients.
409  Works on one or more DCT blocks at a time.  (Note: the coefficients are now
410  emitted in normal array order, which the entropy encoder is expected to
411  convert to zigzag order as necessary.  Prior versions of the IJG code did
412  the conversion to zigzag order within the quantization step.)
413
414* Entropy encoding: Perform Huffman or arithmetic entropy coding and emit the
415  coded data to the data destination module.  Works on one MCU per call.
416  For progressive JPEG, the same DCT blocks are fed to the entropy coder
417  during each pass, and the coder must emit the appropriate subset of
418  coefficients.
419
420* Difference controller: buffer controller for the spatial difference data.
421  When emitting a multiscan JPEG file, this controller is responsible for
422  buffering the full image.  The equivalent of one fully interleaved MCU row
423  of subsampled data is processed per call, even when the JPEG file is
424  noninterleaved.
425
426* Point transformation: Scale the data down by the point transformation
427  parameter.
428
429* Prediction and differencing: Calculate the predictor and subtract it
430  from the input.  Works on one scanline per call.  The difference
431  controller supplies the prior scanline which is used for prediction.
432
433* Lossless entropy encoding: Perform Huffman or arithmetic entropy coding and
434  emit the coded data to the data destination module.  This module handles MCU
435  assembly.  Works on one MCU-row per call.
436
437In addition to the above objects, the compression library includes these
438objects:
439
440* Master control: determines the number of passes required, controls overall
441  and per-pass initialization of the other modules.
442
443* Marker writing: generates JPEG markers (except for RSTn, which is emitted
444  by the entropy encoder when needed).
445
446* Data destination manager: writes the output JPEG datastream to its final
447  destination (e.g., a file).  The destination manager supplied with the
448  library knows how to write to a stdio stream; for other behaviors, the
449  surrounding application may provide its own destination manager.
450
451* Memory manager: allocates and releases memory, controls virtual arrays
452  (with backing store management, where required).
453
454* Error handler: performs formatting and output of error and trace messages;
455  determines handling of nonfatal errors.  The surrounding application may
456  override some or all of this object's methods to change error handling.
457
458* Progress monitor: supports output of "percent-done" progress reports.
459  This object represents an optional callback to the surrounding application:
460  if wanted, it must be supplied by the application.
461
462The error handler, destination manager, and progress monitor objects are
463defined as separate objects in order to simplify application-specific
464customization of the JPEG library.  A surrounding application may override
465individual methods or supply its own all-new implementation of one of these
466objects.  The object interfaces for these objects are therefore treated as
467part of the application interface of the library, whereas the other objects
468are internal to the library.
469
470The error handler and memory manager are shared by JPEG compression and
471decompression; the progress monitor, if used, may be shared as well.
472
473
474*** Decompression object structure ***
475
476Here is a sketch of the logical structure of the JPEG decompression library:
477
478                                            /--> Lossy codec
479                                           /
480                  |-- Decompression codec <          *OR*
481                  |                        \
482                  |                         \--> Lossless codec
483Main controller --|
484                  |
485                  |                               |-- Upsampling
486                  |-- Postprocessing controller --|   |-- Colorspace conversion
487                                                  |-- Color quantization
488                                                  |-- Color precision reduction
489
490
491where the lossy codec looks like:
492
493                             |-- Entropy decoding
494<-- Coefficient controller --|
495                             |-- Dequantize, Inverse DCT
496
497
498and the lossless codec looks like:
499
500                            |-- Lossless entropy decoding
501                            |
502<-- Difference controller --|-- Prediction, undifferencing
503                            |
504                            |-- Point transformation, sample size scaling
505
506
507As before, this diagram also represents typical control flow.  The objects
508shown are:
509
510* Main controller: buffer controller for the subsampled-data buffer, which
511  holds the output of JPEG decompression proper.  This controller's primary
512  task is to feed the postprocessing procedure.  Some upsampling algorithms
513  may require context rows above and below the current row group; when this
514  is true, the main controller is responsible for managing its buffer so as
515  to make context rows available.  In the current design, the main buffer is
516  always a strip buffer; a full-image buffer is never required.
517
518* Coefficient controller: buffer controller for the DCT-coefficient data.
519  This controller handles MCU disassembly, including deletion of any dummy
520  DCT blocks at the right or bottom edge.  When reading a multiscan JPEG
521  file, this controller is responsible for buffering the full image.
522  (Buffering DCT coefficients, rather than samples, is necessary to support
523  progressive JPEG.)  The equivalent of one fully interleaved MCU row of
524  subsampled data is processed per call, even when the source JPEG file is
525  noninterleaved.
526
527* Entropy decoding: Read coded data from the data source module and perform
528  Huffman or arithmetic entropy decoding.  Works on one MCU per call.
529  For progressive JPEG decoding, the coefficient controller supplies the prior
530  coefficients of each MCU (initially all zeroes), which the entropy decoder
531  modifies in each scan.
532
533* Dequantization and inverse DCT: like it says.  Note that the coefficients
534  buffered by the coefficient controller have NOT been dequantized; we
535  merge dequantization and inverse DCT into a single step for speed reasons.
536  When scaled-down output is asked for, simplified DCT algorithms may be used
537  that emit only 1x1, 2x2, or 4x4 samples per DCT block, not the full 8x8.
538  Works on one DCT block at a time.
539
540* Difference controller: buffer controller for the spatial difference data.
541  When reading a multiscan JPEG file, this controller is responsible for
542  buffering the full image. The equivalent of one fully interleaved MCU row
543  is processed per call, even when the source JPEG file is noninterleaved.
544
545* Lossless entropy decoding: Read coded data from the data source module and
546  perform Huffman or arithmetic entropy decoding.  Works on one MCU-row per
547  call.
548
549* Prediction and undifferencing: Calculate the predictor and add it to the
550  decoded difference.  Works on one scanline per call.  The difference
551  controller supplies the prior scanline which is used for prediction.
552
553* Point transform and sample size scaling: Scale the data up by the point
554  transformation parameter and scale it down to fit into the compiled-in
555  sample size.
556
557* Postprocessing controller: buffer controller for the color quantization
558  input buffer, when quantization is in use.  (Without quantization, this
559  controller just calls the upsampler.)  For two-pass quantization, this
560  controller is responsible for buffering the full-image data.
561
562* Upsampling: restores chroma components to full size.  (May support more
563  general output rescaling, too.  Note that if undersized DCT outputs have
564  been emitted by the DCT module, this module must adjust so that properly
565  sized outputs are created.)  Works on one row group at a time.  This module
566  also calls the color conversion module, so its top level is effectively a
567  buffer controller for the upsampling->color conversion buffer.  However, in
568  all but the highest-quality operating modes, upsampling and color
569  conversion are likely to be merged into a single step.
570
571* Colorspace conversion: convert from JPEG color space to output color space,
572  and change data layout from separate component planes to pixel-interleaved.
573  Works on one pixel row at a time.
574
575* Color quantization: reduce the data to colormapped form, using either an
576  externally specified colormap or an internally generated one.  This module
577  is not used for full-color output.  Works on one pixel row at a time; may
578  require two passes to generate a color map.  Note that the output will
579  always be a single component representing colormap indexes.  In the current
580  design, the output values are JSAMPLEs, so an 8-bit compilation cannot
581  quantize to more than 256 colors.  This is unlikely to be a problem in
582  practice.
583
584* Color reduction: this module handles color precision reduction, e.g.,
585  generating 15-bit color (5 bits/primary) from JPEG's 24-bit output.
  It is not quite clear yet how this should be handled; perhaps it should be
  merged with colorspace conversion.
588
589Note that some high-speed operating modes might condense the entire
590postprocessing sequence to a single module (upsample, color convert, and
591quantize in one step).
592
593In addition to the above objects, the decompression library includes these
594objects:
595
596* Master control: determines the number of passes required, controls overall
597  and per-pass initialization of the other modules.  This is subdivided into
598  input and output control: jdinput.c controls only input-side processing,
599  while jdmaster.c handles overall initialization and output-side control.
600
601* Marker reading: decodes JPEG markers (except for RSTn).
602
603* Data source manager: supplies the input JPEG datastream.  The source
604  manager supplied with the library knows how to read from a stdio stream;
605  for other behaviors, the surrounding application may provide its own source
606  manager.
607
608* Memory manager: same as for compression library.
609
610* Error handler: same as for compression library.
611
612* Progress monitor: same as for compression library.
613
614As with compression, the data source manager, error handler, and progress
615monitor are candidates for replacement by a surrounding application.
616
617
618*** Decompression input and output separation ***
619
620To support efficient incremental display of progressive JPEG files, the
621decompressor is divided into two sections that can run independently:
622
6231. Data input includes marker parsing, entropy decoding, and input into the
624   coefficient controller's DCT coefficient buffer.  Note that this
625   processing is relatively cheap and fast.
626
6272. Data output reads from the DCT coefficient buffer and performs the IDCT
628   and all postprocessing steps.
629
630For a progressive JPEG file, the data input processing is allowed to get
631arbitrarily far ahead of the data output processing.  (This occurs only
632if the application calls jpeg_consume_input(); otherwise input and output
633run in lockstep, since the input section is called only when the output
634section needs more data.)  In this way the application can avoid making
635extra display passes when data is arriving faster than the display pass
636can run.  Furthermore, it is possible to abort an output pass without
637losing anything, since the coefficient buffer is read-only as far as the
638output section is concerned.  See libjpeg.doc for more detail.
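
The application-level usage that this division makes possible is documented in
libjpeg.doc; condensed, a progressive-display loop looks roughly like the
sketch below.  Error handling and the actual display code are omitted, and
show_progressively is of course not part of the library.

    #include <stdio.h>
    #include "jpeglib.h"

    /* Decode a (possibly progressive) JPEG file in buffered-image mode,
     * making a display pass over whatever scans have been absorbed so far.
     * A real viewer would interleave this with its event loop. */
    void
    show_progressively (FILE * infile)
    {
      struct jpeg_decompress_struct cinfo;
      struct jpeg_error_mgr jerr;
      JSAMPARRAY buffer;

      cinfo.err = jpeg_std_error(&jerr);
      jpeg_create_decompress(&cinfo);
      jpeg_stdio_src(&cinfo, infile);
      (void) jpeg_read_header(&cinfo, TRUE);
      cinfo.buffered_image = TRUE;          /* select buffered-image mode */
      (void) jpeg_start_decompress(&cinfo);
      buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr) &cinfo, JPOOL_IMAGE,
         cinfo.output_width * cinfo.output_components, (JDIMENSION) 1);
      for (;;) {
        /* Data input: absorb as much compressed data as is available. */
        while (jpeg_consume_input(&cinfo) != JPEG_SUSPENDED &&
               ! jpeg_input_complete(&cinfo))
          ;
        /* Data output: display the scans absorbed so far. */
        (void) jpeg_start_output(&cinfo, cinfo.input_scan_number);
        while (cinfo.output_scanline < cinfo.output_height) {
          (void) jpeg_read_scanlines(&cinfo, buffer, 1);
          /* ... hand buffer[0] to the display here ... */
        }
        (void) jpeg_finish_output(&cinfo);
        if (jpeg_input_complete(&cinfo) &&
            cinfo.input_scan_number == cinfo.output_scan_number)
          break;                            /* final scan has been shown */
      }
      (void) jpeg_finish_decompress(&cinfo);
      jpeg_destroy_decompress(&cinfo);
    }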
639
640A full-image coefficient array is only created if the JPEG file has multiple
641scans (or if the application specifies buffered-image mode anyway).  When
642reading a single-scan file, the coefficient controller normally creates only
643a one-MCU buffer, so input and output processing must run in lockstep in this
644case.  jpeg_consume_input() is effectively a no-op in this situation.
645
646The main impact of dividing the decompressor in this fashion is that we must
647be very careful with shared variables in the cinfo data structure.  Each
648variable that can change during the course of decompression must be
649classified as belonging to data input or data output, and each section must
650look only at its own variables.  For example, the data output section may not
651depend on any of the variables that describe the current scan in the JPEG
652file, because these may change as the data input section advances into a new
653scan.
654
655The progress monitor is (somewhat arbitrarily) defined to treat input of the
656file as one pass when buffered-image mode is not used, and to ignore data
657input work completely when buffered-image mode is used.  Note that the
658library has no reliable way to predict the number of passes when dealing
659with a progressive JPEG file, nor can it predict the number of output passes
660in buffered-image mode.  So the work estimate is inherently bogus anyway.
661
662No comparable division is currently made in the compression library, because
663there isn't any real need for it.
664
665
666*** Data formats ***
667
668Arrays of pixel sample values use the following data structure:
669
670    typedef something JSAMPLE;		a pixel component value, 0..MAXJSAMPLE
671    typedef JSAMPLE *JSAMPROW;		ptr to a row of samples
672    typedef JSAMPROW *JSAMPARRAY;	ptr to a list of rows
673    typedef JSAMPARRAY *JSAMPIMAGE;	ptr to a list of color-component arrays
674
675The basic element type JSAMPLE will typically be one of unsigned char,
676(signed) char, or short.  Short will be used if samples wider than 8 bits are
677to be supported (this is a compile-time option).  Otherwise, unsigned char is
678used if possible.  If the compiler only supports signed chars, then it is
679necessary to mask off the value when reading.  Thus, all reads of JSAMPLE
680values must be coded as "GETJSAMPLE(value)", where the macro will be defined
681as "((value) & 0xFF)" on signed-char machines and "((int) (value))" elsewhere.
682
683With these conventions, JSAMPLE values can be assumed to be >= 0.  This helps
684simplify correct rounding during downsampling, etc.  The JPEG standard's
685specification that sample values run from -128..127 is accommodated by
686subtracting 128 just as the sample value is copied into the source array for
687the DCT step (this will be an array of signed ints).  Similarly, during
688decompression the output of the IDCT step will be immediately shifted back to
6890..255.  (NB: different values are required when 12-bit samples are in use.
690The code is written in terms of MAXJSAMPLE and CENTERJSAMPLE, which will be
691defined as 255 and 128 respectively in an 8-bit implementation, and as 4095
692and 2048 in a 12-bit implementation.)
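
The relevant definitions, condensed from jmorecfg.h, look like this for the
8-bit case (a 12-bit build substitutes a short JSAMPLE together with
MAXJSAMPLE 4095 and CENTERJSAMPLE 2048):

    #ifdef HAVE_UNSIGNED_CHAR
    typedef unsigned char JSAMPLE;
    #define GETJSAMPLE(value)  ((int) (value))
    #else
    typedef char JSAMPLE;               /* may be signed on this compiler */
    #define GETJSAMPLE(value)  ((int) (value) & 0xFF)
    #endif

    #define MAXJSAMPLE      255
    #define CENTERJSAMPLE   128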
693
694We use a pointer per row, rather than a two-dimensional JSAMPLE array.  This
695choice costs only a small amount of memory and has several benefits:
696* Code using the data structure doesn't need to know the allocated width of
697  the rows.  This simplifies edge expansion/compression, since we can work
698  in an array that's wider than the logical picture width.
699* Indexing doesn't require multiplication; this is a performance win on many
700  machines.
701* Arrays with more than 64K total elements can be supported even on machines
702  where malloc() cannot allocate chunks larger than 64K.
703* The rows forming a component array may be allocated at different times
704  without extra copying.  This trick allows some speedups in smoothing steps
705  that need access to the previous and next rows.
706
707Note that each color component is stored in a separate array; we don't use the
708traditional layout in which the components of a pixel are stored together.
709This simplifies coding of modules that work on each component independently,
710because they don't need to know how many components there are.  Furthermore,
711we can read or write each component to a temporary file independently, which
712is helpful when dealing with noninterleaved JPEG files.
713
714In general, a specific sample value is accessed by code such as
715	GETJSAMPLE(image[colorcomponent][row][col])
716where col is measured from the image left edge, but row is measured from the
717first sample row currently in memory.  Either of the first two indexings can
718be precomputed by copying the relevant pointer.
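
For example, a module scanning one row of one component would typically hoist
the first two indexings out of its column loop; this little function is
illustrative only (the variable names are not the library's):

    #include "jpeglib.h"

    long
    sum_one_row (JSAMPIMAGE image, int colorcomponent, int row, int width)
    {
      JSAMPROW ptr = image[colorcomponent][row];  /* precompute the row pointer */
      long total = 0;
      int col;

      for (col = 0; col < width; col++)
        total += GETJSAMPLE(ptr[col]);
      return total;
    }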
719
720
721Since most image-processing applications prefer to work on images in which
722the components of a pixel are stored together, the data passed to or from the
723surrounding application uses the traditional convention: a single pixel is
724represented by N consecutive JSAMPLE values, and an image row is an array of
725(# of color components)*(image width) JSAMPLEs.  One or more rows of data can
726be represented by a pointer of type JSAMPARRAY in this scheme.  This scheme is
727converted to component-wise storage inside the JPEG library.  (Applications
728that want to skip JPEG preprocessing or postprocessing will have to contend
729with component-wise storage.)
730
731
732Arrays of DCT-coefficient values use the following data structure:
733
734    typedef short JCOEF;		a 16-bit signed integer
735    typedef JCOEF JBLOCK[DCTSIZE2];	an 8x8 block of coefficients
736    typedef JBLOCK *JBLOCKROW;		ptr to one horizontal row of 8x8 blocks
737    typedef JBLOCKROW *JBLOCKARRAY;	ptr to a list of such rows
738    typedef JBLOCKARRAY *JBLOCKIMAGE;	ptr to a list of color component arrays
739
740The underlying type is at least a 16-bit signed integer; while "short" is big
741enough on all machines of interest, on some machines it is preferable to use
742"int" for speed reasons, despite the storage cost.  Coefficients are grouped
743into 8x8 blocks (but we always use #defines DCTSIZE and DCTSIZE2 rather than
744"8" and "64").
745
746The contents of a coefficient block may be in either "natural" or zigzagged
747order, and may be true values or divided by the quantization coefficients,
748depending on where the block is in the processing pipeline.  In the current
749library, coefficient blocks are kept in natural order everywhere; the entropy
750codecs zigzag or dezigzag the data as it is written or read.  The blocks
751contain quantized coefficients everywhere outside the DCT/IDCT subsystems.
752(This latter decision may need to be revisited to support variable
753quantization a la JPEG Part 3.)
754
755Notice that the allocation unit is now a row of 8x8 blocks, corresponding to
756eight rows of samples.  Otherwise the structure is much the same as for
757samples, and for the same reasons.
758
759On machines where malloc() can't handle a request bigger than 64Kb, this data
760structure limits us to rows of less than 512 JBLOCKs, or a picture width of
7614000+ pixels.  This seems an acceptable restriction.
762
763
764On 80x86 machines, the bottom-level pointer types (JSAMPROW and JBLOCKROW)
765must be declared as "far" pointers, but the upper levels can be "near"
766(implying that the pointer lists are allocated in the DS segment).
767We use a #define symbol FAR, which expands to the "far" keyword when
768compiling on 80x86 machines and to nothing elsewhere.
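
Slightly condensed, the relevant definitions from jmorecfg.h and jpeglib.h
are:

    #ifdef NEED_FAR_POINTERS            /* defined when compiling for 80x86 */
    #define FAR  far
    #else
    #define FAR
    #endif

    typedef JSAMPLE FAR *JSAMPROW;      /* bottom-level pointers may be far */
    typedef JSAMPROW *JSAMPARRAY;       /* upper levels are near */
    typedef JSAMPARRAY *JSAMPIMAGE;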
769
770
771*** Suspendable processing ***
772
773In some applications it is desirable to use the JPEG library as an
774incremental, memory-to-memory filter.  In this situation the data source or
775destination may be a limited-size buffer, and we can't rely on being able to
776empty or refill the buffer at arbitrary times.  Instead the application would
777like to have control return from the library at buffer overflow/underrun, and
778then resume compression or decompression at a later time.
779
780This scenario is supported for simple cases.  (For anything more complex, we
781recommend that the application "bite the bullet" and develop real multitasking
782capability.)  The libjpeg.doc file goes into more detail about the usage and
783limitations of this capability; here we address the implications for library
784structure.
785
786The essence of the problem is that the entropy codec (coder or decoder) must
787be prepared to stop at arbitrary times.  In turn, the controllers that call
788the entropy codec must be able to stop before having produced or consumed all
789the data that they normally would handle in one call.  That part is reasonably
790straightforward: we make the controller call interfaces include "progress
791counters" which indicate the number of data chunks successfully processed, and
792we require callers to test the counter rather than just assume all of the data
793was processed.
794
795Rather than trying to restart at an arbitrary point, the current Huffman
796codecs are designed to restart at the beginning of the current MCU after a
797suspension due to buffer overflow/underrun.  At the start of each call, the
798codec's internal state is loaded from permanent storage (in the JPEG object
799structures) into local variables.  On successful completion of the MCU, the
800permanent state is updated.  (This copying is not very expensive, and may even
801lead to *improved* performance if the local variables can be registerized.)
802If a suspension occurs, the codec simply returns without updating the state,
803thus effectively reverting to the start of the MCU.  Note that this implies
leaving some data unprocessed in the source/destination buffer (i.e., the
805compressed partial MCU).  The data source/destination module interfaces are
806specified so as to make this possible.  This also implies that the data buffer
807must be large enough to hold a worst-case compressed MCU; a couple thousand
808bytes should be enough.
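
Schematically, each per-MCU entropy routine therefore has the shape shown
below.  The type and function names are hypothetical; the real coders keep
their working state in small "savable state" structs of their own, but the
pattern is the same.

    #include "jpeglib.h"

    typedef struct {
      long put_buffer;          /* pending output bits, last DC values, etc. */
      int put_bits;
    } my_savable_state;

    extern boolean emit_one_mcu (my_savable_state * state /* , MCU data... */);

    boolean
    encode_mcu (my_savable_state * permanent)
    {
      my_savable_state working = *permanent;    /* load state into locals */

      if (! emit_one_mcu(&working))
        return FALSE;           /* suspended: permanent state is untouched, so
                                 * the whole MCU is retried on the next call */
      *permanent = working;     /* success: commit the updated state */
      return TRUE;
    }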
809
810In a successive-approximation AC refinement scan, the progressive Huffman
811decoder has to be able to undo assignments of newly nonzero coefficients if it
812suspends before the MCU is complete, since decoding requires distinguishing
813previously-zero and previously-nonzero coefficients.  This is a bit tedious
814but probably won't have much effect on performance.  Other variants of Huffman
815decoding need not worry about this, since they will just store the same values
816again if forced to repeat the MCU.
817
818This approach would probably not work for an arithmetic codec, since its
819modifiable state is quite large and couldn't be copied cheaply.  Instead it
820would have to suspend and resume exactly at the point of the buffer end.
821
822The JPEG marker reader is designed to cope with suspension at an arbitrary
823point.  It does so by backing up to the start of the marker parameter segment,
824so the data buffer must be big enough to hold the largest marker of interest.
825Again, a couple KB should be adequate.  (A special "skip" convention is used
826to bypass COM and APPn markers, so these can be larger than the buffer size
827without causing problems; otherwise a 64K buffer would be needed in the worst
828case.)
829
830The JPEG marker writer currently does *not* cope with suspension.  I feel that
831this is not necessary; it is much easier simply to require the application to
832ensure there is enough buffer space before starting.  (An empty 2K buffer is
833more than sufficient for the header markers; and ensuring there are a dozen or
834two bytes available before calling jpeg_finish_compress() will suffice for the
835trailer.)  This would not work for writing multi-scan JPEG files, but
836we simply do not intend to support that capability with suspension.
837
838
839*** Memory manager services ***
840
841The JPEG library's memory manager controls allocation and deallocation of
842memory, and it manages large "virtual" data arrays on machines where the
843operating system does not provide virtual memory.  Note that the same
844memory manager serves both compression and decompression operations.
845
846In all cases, allocated objects are tied to a particular compression or
847decompression master record, and they will be released when that master
848record is destroyed.
849
850The memory manager does not provide explicit deallocation of objects.
851Instead, objects are created in "pools" of free storage, and a whole pool
852can be freed at once.  This approach helps prevent storage-leak bugs, and
853it speeds up operations whenever malloc/free are slow (as they often are).
854The pools can be regarded as lifetime identifiers for objects.  Two
855pools/lifetimes are defined:
856  * JPOOL_PERMANENT	lasts until master record is destroyed
857  * JPOOL_IMAGE		lasts until done with image (JPEG datastream)
858Permanent lifetime is used for parameters and tables that should be carried
859across from one datastream to another; this includes all application-visible
860parameters.  Image lifetime is used for everything else.  (A third lifetime,
861JPOOL_PASS = one processing pass, was originally planned.  However it was
862dropped as not being worthwhile.  The actual usage patterns are such that the
863peak memory usage would be about the same anyway; and having per-pass storage
864substantially complicates the virtual memory allocation rules --- see below.)
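
A module allocates from these pools through method pointers in the memory
manager object.  For example (the struct being allocated is hypothetical;
alloc_small, JPOOL_IMAGE, and JPOOL_PERMANENT are the real names from
jpeglib.h):

    #include "jpeglib.h"

    typedef struct { int pass_counter; } my_module_state;      /* hypothetical */

    void
    allocate_working_storage (j_compress_ptr cinfo)
    {
      my_module_state * state;
      JQUANT_TBL * qtbl;

      /* Per-image storage: released automatically when the image is done. */
      state = (my_module_state *)
        (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_IMAGE,
                                    sizeof(my_module_state));

      /* A quantization table should survive into the next datastream,
       * so it comes from the permanent pool. */
      qtbl = (JQUANT_TBL *)
        (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT,
                                    sizeof(JQUANT_TBL));
      /* ... */
    }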
865
866The memory manager deals with three kinds of object:
8671. "Small" objects.  Typically these require no more than 10K-20K total.
8682. "Large" objects.  These may require tens to hundreds of K depending on
869   image size.  Semantically they behave the same as small objects, but we
870   distinguish them for two reasons:
871     * On MS-DOS machines, large objects are referenced by FAR pointers,
872       small objects by NEAR pointers.
873     * Pool allocation heuristics may differ for large and small objects.
874   Note that individual "large" objects cannot exceed the size allowed by
875   type size_t, which may be 64K or less on some machines.
8763. "Virtual" objects.  These are large 2-D arrays of JSAMPLEs or JBLOCKs
877   (typically large enough for the entire image being processed).  The
878   memory manager provides stripwise access to these arrays.  On machines
879   without virtual memory, the rest of the array may be swapped out to a
880   temporary file.
881
882(Note: JSAMPARRAY and JBLOCKARRAY data structures are a combination of large
883objects for the data proper and small objects for the row pointers.  For
884convenience and speed, the memory manager provides single routines to create
885these structures.  Similarly, virtual arrays include a small control block
886and a JSAMPARRAY or JBLOCKARRAY working buffer, all created with one call.)
887
888In the present implementation, virtual arrays are only permitted to have image
889lifespan.  (Permanent lifespan would not be reasonable, and pass lifespan is
890not very useful since a virtual array's raison d'etre is to store data for
891multiple passes through the image.)  We also expect that only "small" objects
892will be given permanent lifespan, though this restriction is not required by
893the memory manager.
894
895In a non-virtual-memory machine, some performance benefit can be gained by
896making the in-memory buffers for virtual arrays be as large as possible.
897(For small images, the buffers might fit entirely in memory, so blind
898swapping would be very wasteful.)  The memory manager will adjust the height
899of the buffers to fit within a prespecified maximum memory usage.  In order
900to do this in a reasonably optimal fashion, the manager needs to allocate all
901of the virtual arrays at once.  Therefore, there isn't a one-step allocation
902routine for virtual arrays; instead, there is a "request" routine that simply
903allocates the control block, and a "realize" routine (called just once) that
904determines space allocation and creates all of the actual buffers.  The
905realize routine must allow for space occupied by non-virtual large objects.
906(We don't bother to factor in the space needed for small objects, on the
907grounds that it isn't worth the trouble.)
908
909To support all this, we establish the following protocol for doing business
910with the memory manager:
911  1. Modules must request virtual arrays (which may have only image lifespan)
912     during the initial setup phase, i.e., in their jinit_xxx routines.
913  2. All "large" objects (including JSAMPARRAYs and JBLOCKARRAYs) must also be
914     allocated during initial setup.
915  3. realize_virt_arrays will be called at the completion of initial setup.
916     The above conventions ensure that sufficient information is available
917     for it to choose a good size for virtual array buffers.
918Small objects of any lifespan may be allocated at any time.  We expect that
919the total space used for small objects will be small enough to be negligible
920in the realize_virt_arrays computation.
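
In code, the protocol looks roughly like this.  The method names and
signatures are those declared in jpeglib.h; the dimension variables
(padded_width, padded_height, strip_height, current_row) are illustrative and
assumed to be in scope.

    /* In a module's jinit_xxx routine: reserve the array. */
    jvirt_sarray_ptr whole_image;
    whole_image = (*cinfo->mem->request_virt_sarray)
        ((j_common_ptr) cinfo, JPOOL_IMAGE, FALSE /* not pre-zeroed */,
         (JDIMENSION) padded_width, (JDIMENSION) padded_height,
         (JDIMENSION) strip_height /* max rows accessed per call */);

    /* Master control, after all modules have registered their needs: */
    (*cinfo->mem->realize_virt_arrays) ((j_common_ptr) cinfo);

    /* During a pass, map a strip of the array into memory on demand: */
    JSAMPARRAY strip;
    strip = (*cinfo->mem->access_virt_sarray)
        ((j_common_ptr) cinfo, whole_image,
         (JDIMENSION) current_row, (JDIMENSION) strip_height, TRUE /* writable */);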
921
922In a virtual-memory machine, we simply pretend that the available space is
923infinite, thus causing realize_virt_arrays to decide that it can allocate all
924the virtual arrays as full-size in-memory buffers.  The overhead of the
925virtual-array access protocol is very small when no swapping occurs.
926
927A virtual array can be specified to be "pre-zeroed"; when this flag is set,
928never-yet-written sections of the array are set to zero before being made
929available to the caller.  If this flag is not set, never-written sections
930of the array contain garbage.  (This feature exists primarily because the
931equivalent logic would otherwise be needed in jdcoefct.c for progressive
932JPEG mode; we may as well make it available for possible other uses.)
933
934The first write pass on a virtual array is required to occur in top-to-bottom
935order; read passes, as well as any write passes after the first one, may
936access the array in any order.  This restriction exists partly to simplify
937the virtual array control logic, and partly because some file systems may not
938support seeking beyond the current end-of-file in a temporary file.  The main
939implication of this restriction is that rearrangement of rows (such as
940converting top-to-bottom data order to bottom-to-top) must be handled while
941reading data out of the virtual array, not while putting it in.
942
943
944*** Memory manager internal structure ***
945
946To isolate system dependencies as much as possible, we have broken the
947memory manager into two parts.  There is a reasonably system-independent
948"front end" (jmemmgr.c) and a "back end" that contains only the code
949likely to change across systems.  All of the memory management methods
950outlined above are implemented by the front end.  The back end provides
951the following routines for use by the front end (none of these routines
952are known to the rest of the JPEG code):
953
954jpeg_mem_init, jpeg_mem_term	system-dependent initialization/shutdown
955
956jpeg_get_small, jpeg_free_small	interface to malloc and free library routines
957				(or their equivalents)
958
959jpeg_get_large, jpeg_free_large	interface to FAR malloc/free in MSDOS machines;
960				else usually the same as
961				jpeg_get_small/jpeg_free_small
962
963jpeg_mem_available		estimate available memory
964
965jpeg_open_backing_store		create a backing-store object
966
967read_backing_store,		manipulate a backing-store object
968write_backing_store,
969close_backing_store
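
On systems with an adequate malloc, the small-object entry points reduce to
almost nothing; this is essentially what the simplest supplied back end
(jmemnobs.c) does, with its portability macro decorations omitted:

    #include <stdlib.h>
    #include "jpeglib.h"
    #include "jmemsys.h"

    void *
    jpeg_get_small (j_common_ptr cinfo, size_t sizeofobject)
    {
      return (void *) malloc(sizeofobject);
    }

    void
    jpeg_free_small (j_common_ptr cinfo, void * object, size_t sizeofobject)
    {
      free(object);
    }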
970
971On some systems there will be more than one type of backing-store object
972(specifically, in MS-DOS a backing store file might be an area of extended
973memory as well as a disk file).  jpeg_open_backing_store is responsible for
974choosing how to implement a given object.  The read/write/close routines
975are method pointers in the structure that describes a given object; this
976lets them be different for different object types.
977
978It may be necessary to ensure that backing store objects are explicitly
979released upon abnormal program termination.  For example, MS-DOS won't free
980extended memory by itself.  To support this, we will expect the main program
981or surrounding application to arrange to call self_destruct (typically via
982jpeg_destroy) upon abnormal termination.  This may require a SIGINT signal
983handler or equivalent.  We don't want to have the back end module install its
984own signal handler, because that would pre-empt the surrounding application's
985ability to control signal handling.
986
987The IJG distribution includes several memory manager back end implementations.
988Usually the same back end should be suitable for all applications on a given
989system, but it is possible for an application to supply its own back end at
990need.
991
992
993*** Implications of DNL marker ***
994
995Some JPEG files may use a DNL marker to postpone definition of the image
996height (this would be useful for a fax-like scanner's output, for instance).
997In these files the SOF marker claims the image height is 0, and you only
998find out the true image height at the end of the first scan.
999
1000We could read these files as follows:
10011. Upon seeing zero image height, replace it by 65535 (the maximum allowed).
10022. When the DNL is found, update the image height in the global image
1003   descriptor.
1004This implies that control modules must avoid making copies of the image
1005height, and must re-test for termination after each MCU row.  This would
1006be easy enough to do.
1007
1008In cases where image-size data structures are allocated, this approach will
1009result in very inefficient use of virtual memory or much-larger-than-necessary
1010temporary files.  This seems acceptable for something that probably won't be a
1011mainstream usage.  People might have to forgo use of memory-hogging options
1012(such as two-pass color quantization or noninterleaved JPEG files) if they
1013want efficient conversion of such files.  (One could improve efficiency by
1014demanding a user-supplied upper bound for the height, less than 65536; in most
1015cases it could be much less.)
1016
1017The standard also permits the SOF marker to overestimate the image height,
1018with a DNL to give the true, smaller height at the end of the first scan.
1019This would solve the space problems if the overestimate wasn't too great.
1020However, it implies that you don't even know whether DNL will be used.
1021
1022This leads to a couple of very serious objections:
10231. Testing for a DNL marker must occur in the inner loop of the decompressor's
1024   Huffman decoder; this implies a speed penalty whether the feature is used
1025   or not.
10262. There is no way to hide the last-minute change in image height from an
1027   application using the decoder.  Thus *every* application using the IJG
1028   library would suffer a complexity penalty whether it cared about DNL or
1029   not.
1030We currently do not support DNL because of these problems.
1031
1032A different approach is to insist that DNL-using files be preprocessed by a
1033separate program that reads ahead to the DNL, then goes back and fixes the SOF
1034marker.  This is a much simpler solution and is probably far more efficient.
Even if one wants piped input, buffering the first scan of the JPEG file needs
a much smaller temp file than is implied by the maximum-height method.  For
1037this approach we'd simply treat DNL as a no-op in the decompressor (at most,
1038check that it matches the SOF image height).
1039
1040We will not worry about making the compressor capable of outputting DNL.
1041Something similar to the first scheme above could be applied if anyone ever
1042wants to make that work.
1043