@chapsummary

This second-level library provides representations for @emph{cameras}. A
camera enables the projection of 3-d structures into an image. A large
segment of computer vision research is devoted to the calibration of
cameras and the use of cameras to visualize scenes as well as to
reconstruct the 3-d geometry of scenes from multiple views. The
@code{vpgl} library provides the essential classes and algorithms for
carrying out these tasks. The name @code{vpgl} stands for VXL
photogrammetry library. The name is derived from the field of
photogrammetry, which is also focused on computations involving the
projection of the 3-d world into images and the recovery of 3-d
structure from multiple images. Works on photogrammetry date back
several centuries and thus provide the inspiration for the name.
@endchapsummary

@section Class Overview
A Unified Modeling Language (UML) presentation of the key
@code{vpgl} classes is shown in Figure 1. The @code{camera} classes
represent various parameterizations of the projection from 3-d to
2-d. The @code{fundamental_matrix} and @code{essential_matrix} define
the geometric relationship between a pair of images. The @code{lvcs}
class is necessary to provide a mapping between geographic coordinates
and a local Euclidean reference frame. A summary of the role of each
camera class is itemized as follows.

@itemize @bullet
@item @code{vpgl_camera<T>} - The base class for all cameras. The virtual function common to all camera classes is @code{void project(T x, T y, T z, T& u, T& v)}. This function projects a given 3-d point, (x, y, z), into a 2-d image location (u, v).
@item @code{vpgl_proj_camera<T>} - A camera that parameterizes the
projection as a 3x4 matrix. A 3-d point to be projected is represented
in homogeneous coordinates as, for example, a
@code{vgl_homg_point_3d<T>}. The corresponding four-element vector is
multiplied by the projection matrix to produce the three homogeneous
coordinates of an image point. The image point is represented, for
example, as a @code{vgl_homg_point_2d<T>}. The use of homogeneous
coordinates accounts for points that may be at infinity in either the
3-d world or the image.
@item @code{vpgl_perspective_camera<T>} - A camera that is defined by
a triple of parameter classes: (K) @code{vpgl_calibration_matrix<T>};
(R) @code{vgl_rotation_3d<T>}; and (t) @code{vgl_vector_3d<T>}. The
matrix K represents the internal parameters of a physical camera, such
as focal length. The 3-d rotation, R, defines the orientation of the
camera coordinate system with respect to the world origin. The vector,
t, is the translation of the camera center with respect to the world
origin.
@item @code{vpgl_affine_camera<T>} - A camera that has its center of
projection at infinity. Thus the camera rays are all parallel to each
other. This camera permits significantly simpler computation of the
properties of projected 3-d geometry. For example, parallel lines in
the 3-d world project to parallel lines in the image, which is not the
case for a perspective camera. The affine camera is parameterized by a
3x4 matrix whose last row is (0 0 0 1).
@item @code{vpgl_rational_camera<T>} - The projection of a rational
camera is represented by four cubic polynomials in x, y and z, and
associated scales and offsets. This representation can accurately model
the image mapping produced by a scanning line camera such as is
employed in satellite image collectors. There are 90 parameters
overall, 80 to represent the polynomial coefficients and 10 for the
scales and offsets. The projection is from geographic coordinates,
longitude, latitude and elevation, to image row (line) and column
(sample).
@item @code{vpgl_local_rational_camera<T>} - Similar to the rational
camera except the 3-d point coordinates are expressed in a local
tangent plane to the Earth's surface at a point specified by the local
vertical coordinate system (lvcs). The lvcs is represented by the @code{vpgl_lvcs} class.
@item @code{vpgl_generic_camera<T>} - A camera that is represented by a grid of rays. This structure can represent very general forms of world-to-image mapping, e.g. a camera that has multiple centers of projection.
@end itemize
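To make the common @code{project} interface concrete, the following self-contained C++ sketch mimics the shape of the camera hierarchy. The class names and exact signatures here are illustrative only, not the actual @code{vpgl} declarations.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the camera hierarchy's common interface;
// the real vpgl_camera<T> differs in detail.
template <class T>
class camera {
 public:
  virtual ~camera() = default;
  // Project the 3-d point (x, y, z) to the image location (u, v).
  virtual void project(T x, T y, T z, T& u, T& v) const = 0;
};

// A 3x4-matrix camera, analogous in spirit to vpgl_proj_camera<T>.
template <class T>
class matrix_camera : public camera<T> {
 public:
  explicit matrix_camera(const T m[3][4]) {
    for (int r = 0; r < 3; ++r)
      for (int c = 0; c < 4; ++c) m_[r][c] = m[r][c];
  }
  void project(T x, T y, T z, T& u, T& v) const override {
    T h[3];
    for (int r = 0; r < 3; ++r)
      h[r] = m_[r][0]*x + m_[r][1]*y + m_[r][2]*z + m_[r][3];
    u = h[0]/h[2];  // divide out the projective scale factor
    v = h[1]/h[2];
  }
 private:
  T m_[3][4];
};
```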

@figure
@image{vpgl_classes,,3in}
@caption{1}
The UML description of the major classes in @code{vpgl}.
@endcaption
@endfigure
It is noted that @code{vpgl} is dependent on two other core libraries, @code{vgl} and @code{vnl}.
@subsection @code{vpgl_proj_camera}
The projective camera is based on the computations inherent in
projective geometry. As described in the chapter on @code{vgl}, in
order to uniformly represent points at infinity as well as finite
points it is necessary to use homogeneous coordinates, where an extra
scale factor is appended to the Euclidean coordinates. Thus a point in
3-d is represented by the vector @math{X = (x, y, z, 1)^t}. The
projection is then defined by,

 @math{(@lambda u, @lambda v, @lambda)^t = [3@times4]@ X},

 where @math{[3@times4]} is the 3x4 projection matrix and
@math{@lambda} is called the projective scale factor. The @code{vgl}
library represents both 2-d and 3-d points in homogeneous
coordinates, i.e., @code{vgl_homg_point_2d<T>} and
@code{vgl_homg_point_3d<T>} respectively. @code{vpgl_proj_camera<T>}
also supports an interface to @code{vnl}, where homogeneous coordinates
are represented by vnl vectors. The projection matrix of the
@code{vpgl_proj_camera} is represented internally as a
@code{vnl_matrix_fixed<T, 3, 4>}.

The projective camera also supports @emph{backprojection}, which casts
a ray through a point in the image. A first point on the ray is
defined as the center of projection, @math{C}, which is the null vector
of the projection matrix. That is,

@math{(0, 0, 0)^t = [3@times4]@ (C_x, C_y, C_z, 1)^t}.

The second point on the ray, @math{X}, is any solution to the equation,

@math{(u, v, 1)^t = [3@times4]@ X},

where @math{(u, v, 1)^t} is the image point being backprojected. The
solution of this equation uses the @code{vnl_svd} @code{solve} method,
and so the SVD is cached as a member of @code{vpgl_proj_camera} for
efficiency.
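The null-vector condition above is easy to check directly. The following self-contained sketch verifies that a candidate center @math{C} satisfies @math{[3@times4]@ (C_x, C_y, C_z, 1)^t = 0}; the helper name is illustrative, and @code{vpgl} itself obtains @math{C} with @code{vnl_svd} instead.

```cpp
#include <cassert>
#include <cmath>

// Verify that a candidate center of projection C is the null vector of
// a 3x4 projection matrix: M * (Cx, Cy, Cz, 1)^t = (0, 0, 0)^t.
bool is_center(const double M[3][4], double cx, double cy, double cz) {
  for (int r = 0; r < 3; ++r) {
    double s = M[r][0]*cx + M[r][1]*cy + M[r][2]*cz + M[r][3];
    if (std::fabs(s) > 1e-12) return false;
  }
  return true;
}
```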

The following example illustrates the use of the basic @code{vpgl_proj_camera} projection methods.
@example
  vnl_matrix_fixed<double,3,4> M( 0.0 );
  M[0][0] = M[1][1] = M[2][2] = M[2][3] = 1;
  vpgl_proj_camera<double> pcam(M);
  vgl_homg_point_3d<double> X(1, 2, 9, 1);
  vgl_homg_point_2d<double> x;
  x = pcam.project(X); // x = pcam(X) also works
@end example
The result is @math{x = (1, 2, 10)^t}, or @math{u=0.1, v = 0.2}. Note that the () operator is overloaded as a shorthand for @code{project}.

An example of backprojection is as follows.
@example
   vgl_homg_point_2d<double> p(0.0, 0.0);
   vgl_homg_line_3d_2_points<double> l3d = pcam.backproject(p);
   vgl_ray_3d<double> r3d = pcam.backproject_ray(p);
@end example
The line @code{l3d} is represented by two points. The finite point,
@code{l3d.point_finite()}, corresponds to the center of
projection of the camera. The second point,
@code{l3d.point_infinite()}, is an ideal point (point at infinity) and
corresponds to the tangent vector along the line. An alternative
representation of the result of backprojection is
@code{vgl_ray_3d<T>}, which is described by a 3-d origin,
@code{vgl_point_3d<T>}, and a ray direction, @code{vgl_vector_3d<T>}.

@subsection @code{vpgl_perspective_camera}
@figure
@image{perspectivecamera,,3in}
@caption{2}
The geometry of the perspective camera.
@endcaption
@endfigure
The parameters of the perspective camera geometry are grouped in terms
of @emph{interior} and @emph{exterior} parameters. The exterior
parameters are the 3-d rotation, @math{R}, and translation vector,
@math{t}, that relate the coordinate system of Figure 2 to the world
origin. The position of the camera center, @math{C}, is related to the
exterior camera parameters by @math{C = -R^t t}. The interior
parameters are shown in Figure 2.

The key interior parameters
are the principal point, @math{p}, where the ray from the center of
projection, perpendicular to the image plane, pierces the image plane,
and the focal length, @math{f}, which is the distance from the center of
projection to the image plane along the principal ray.

The perspective camera projection maps a 3-d homogeneous point,
@math{X}, into a homogeneous image point, @math{x}. The projection
operation is factored into two matrix components as @math{ x =
K[R|t]X}, where @math{[R|t]} is a partitioned 3x4 matrix. The 3x3
matrix @math{K} is called the @emph{calibration matrix} and
represents the internal camera parameters. That is,
@example
      _           _
     | fx  s   px  |
 K = |  0  fy  py  |
     |  0  0   1   |
      -           -
@end example
The elements @code{fx} and @code{fy} are the focal length represented
in units of the physical pixel size in the @code{x} and @code{y} coordinate
system of the imaging device. A typical pixel dimension is @code{dx
= dy = 5.0} microns. Thus for a 50mm focal length, @code{fx = fy =
10000.0}. The element @code{s} represents the skew that results if the image plane is tilted with respect to the lens axis. The principal point image coordinates are @code{px} and @code{py}.
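The arithmetic relating the physical focal length and pixel size to the calibration-matrix entries is worth spelling out. The following sketch (the helper function is illustrative, not a @code{vpgl} routine) reproduces the 50mm / 5-micron example above.

```cpp
#include <cassert>
#include <cmath>

// fx = focal length / pixel size: converts the physical focal length
// into pixel units for the diagonal entries of the calibration matrix K.
double focal_in_pixels(double focal_m, double pixel_m) {
  return focal_m / pixel_m;
}
```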

The use of @code{vpgl} classes in constructing and accessing a perspective camera is illustrated by the following example.
@example
 // The default constructor is the identity matrix
 vpgl_calibration_matrix<double> K;
 K.set_focal_length(50.0e-3);
 K.set_x_scale(5.0e-6);
 K.set_y_scale(5.0e-6);
 vgl_point_2d<double> pp(640.0, 384.0); //for a 1280x768 image
 K.set_principal_point(pp);
 vgl_rotation_3d<double> R; // the identity rotation
 vgl_vector_3d<double> t(0, 0, 10.0); //Translate along z.
 vpgl_perspective_camera<double> per_cam(K, R, t);
 vgl_homg_point_3d<double> X(1.0, 2.0, 9.0, 1.0);
 vgl_homg_point_2d<double> x = per_cam(X);
 // the principal ray direction
 vgl_vector_3d<double> pray_dir = per_cam.principal_axis();
 // point the camera so that the principal ray
 // pierces the specified point
 vgl_homg_point_3d<double> origin(0, 0, 0);
 per_cam.look_at(origin);
@end example

@subsection @code{vpgl_affine_camera}
The affine camera is a type of projective camera where the center of
projection is at infinity (an ideal point). In this case, all the
camera rays are parallel, i.e. they intersect at infinity. The main
addition to this class is a viewing distance to provide a finite
location for the origin of the camera rays. A finite ray origin is
useful for ray tracing algorithms. The affine camera matrix can be
specified by the first two rows of the 3x4 projection matrix, since the
bottom row is @math{[0 @ 0 @ 0 @ 1]}. An example illustrating the affine camera interface follows.
@example
  vnl_vector_fixed<double, 4> r0(1.0, 0.0, 0.0, 0.0);
  vnl_vector_fixed<double, 4> r1(0.0, 1.0, 0.0, 0.0);
  vpgl_affine_camera<double> aff_cam(r0, r1);
  aff_cam.set_viewing_distance(1000.0); //1Km above the scene
  vgl_homg_plane_3d<double> pplane = aff_cam.principal_plane();
@end example
In this example, the @code{principal_plane} is perpendicular to the camera rays and is positioned so the perpendicular distance from the origin is the @code{viewing_distance}.

@subsection @code{vpgl_rational_camera}
The rational camera does not represent physical parameters of
projection as in the case of the perspective camera. Instead the
camera computation is an approximation to some physical projection
process, based on rational polynomials. This formulation arises since
the actual physical model may have a large number of parameters and
the projection computation may involve thousands of calculations. An
example is a @emph{pushbroom} image sensor, where a moving linear
array is scanned over the scene by a rotating platform. The motion of
the platform may be non-uniform with overshoot and oscillations. This
complex scanning motion can be represented by a polynomial
approximation with far fewer parameters than the kinetic motion
parameters of the scanning platform. The standard rational camera is
based on cubic polynomials in (x, y, z) with units in the WGS84 geographic coordinate system of longitude (x), latitude (y) and elevation (z).

A cubic polynomial in three variables requires 20 monomial terms, such as @math{x^3, x^2y, ..., y^3, ... 1}. There are four such polynomials used to compute the row and column coordinates of the projected 3-d geographic position. That is,

 @math{ u = {P(x, y, z) @over Q(x, y,z)}}  and @math{v = {R(x, y, z) @over S(x, y, z)}},

where @math{P, Q, R, S} are cubic polynomials. The polynomial coefficients are computed from the recorded motion states of the sensor platform during the image scan. This representation is also called a Rational Polynomial Coefficient (RPC) model. It is common for commercial satellite images to have associated metadata that includes the 90 parameters of the RPC model.
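The evaluation of one image coordinate as a ratio of cubic polynomials can be sketched in a few lines. Note that the monomial ordering below is purely illustrative: real RPC metadata fixes its own coefficient ordering, and the function names are not @code{vpgl} API.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Build the 20 cubic monomials of (x, y, z). The ordering here is
// illustrative; RPC metadata defines its own coefficient order.
std::array<double, 20> cubic_monomials(double x, double y, double z) {
  return {1, x, y, z, x*y, x*z, y*z, x*x, y*y, z*z,
          x*y*z, x*x*x, x*x*y, x*x*z, x*y*y, x*z*z,
          y*y*y, y*y*z, y*z*z, z*z*z};
}

double dot20(const std::array<double,20>& a, const std::array<double,20>& b) {
  double s = 0.0;
  for (int i = 0; i < 20; ++i) s += a[i]*b[i];
  return s;
}

// u = P(x,y,z) / Q(x,y,z): one image coordinate of the rational projection.
double rational_coord(const std::array<double,20>& num,
                      const std::array<double,20>& den,
                      double x, double y, double z) {
  std::array<double,20> m = cubic_monomials(x, y, z);
  return dot20(num, m) / dot20(den, m);
}
```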

The standard units of x and y are degrees and the unit of z is the
meter. These units are significantly different in scale for the same
Euclidean distance. For example, on the surface of the Earth one degree
is about 100Km. To avoid large inaccuracies in the projection due to
numerical precision it is necessary to offset and scale both the 3-d
and 2-d coordinates to the range @math{[-1,@ 1]}. This normalization
is defined by ten additional camera parameters, which represent scale
and offset coefficients, two for each of the five world and image
coordinates. Constructing and applying the rational camera is illustrated by the following example.
@example
//Rational polynomial coefficients
vcl_vector<double> neu_u(20,0.0), den_u(20,0.0),
neu_v(20,0.0), den_v(20,0.0);

neu_u[0]=0.1; neu_u[10]=0.071; neu_u[7]=0.01;  neu_u[9]=0.3;
neu_u[15]=1.0; neu_u[18]=1.0; neu_u[19]=0.75;

den_u[0]=0.1; den_u[10]=0.05; den_u[17]=0.01; den_u[9]=1.0;
den_u[15]=1.0; den_u[18]=1.0; den_u[19]=1.0;

neu_v[0]=0.02; neu_v[10]=0.014; neu_v[7]=0.1; neu_v[9]=0.4;
neu_v[15]=0.5; neu_v[18]=0.01; neu_v[19]=0.33;

den_v[0]=0.1; den_v[10]=0.05; den_v[17]=0.03; den_v[9]=1.0;
den_v[15]=1.0; den_v[18]=0.3; den_v[19]=1.0;

//Scale and offsets
double sx = 50.0, ox = 150.0;
double sy = 125.0, oy = 100.0;
double sz = 5.0, oz = 10.0;
double su = 1000.0, ou = 500;
double sv = 500.0, ov = 200;
//construct the camera
vpgl_rational_camera<double> rcam(neu_u, den_u, neu_v, den_v,
                                  sx, ox, sy, oy, sz, oz,
                                  su, ou, sv, ov);
vgl_point_3d<double> X(150.0, 100.0, 10.0);
vgl_point_2d<double> x;
x = rcam.project(X); // x = (1250.0, 365.0)
@end example
It is noted that the @code{vpgl_rational_camera} class does not have a @code{back_project} method. Solving a system of third-order polynomials is not feasible in closed form and involves elaborate non-linear optimization routines. Thus the backprojection operations are relegated to the @code{/algo} sub-library.
@subsection @code{vpgl_local_rational_camera}
The geographic spherical coordinates defined for the standard rational
camera are not convenient when processing geometry such as 3-d models
of buildings. For example, determining if planar surfaces are
orthogonal is an involved computation when the planes are described in
spherical Earth coordinates. It is often simpler to convert the
geographic coordinates to a local Euclidean frame. The Euclidean frame
is based on a local tangent plane at a specified geographic origin as
shown in Figure 3. If the scene of interest is relatively small, say
less than 1Km, then the local tangent approximation is sufficiently
accurate for most computer vision applications.
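For intuition, a flat-Earth tangent-plane conversion can be sketched as below. This is only a rough small-scene approximation under a spherical-Earth assumption; @code{vpgl_lvcs} performs the exact conversion, and the function name here is illustrative.

```cpp
#include <cassert>
#include <cmath>

// Flat-Earth approximation of a local tangent frame: convert geographic
// (lon, lat, elev) to meters east/north/up relative to an origin
// (lon0, lat0, elev0). A rough sketch for small scenes only.
void geo_to_local(double lon, double lat, double elev,
                  double lon0, double lat0, double elev0,
                  double& x, double& y, double& z) {
  const double R = 6378137.0;  // WGS84 equatorial radius, meters
  const double deg = 3.14159265358979323846 / 180.0;
  x = (lon - lon0) * deg * R * std::cos(lat0 * deg);  // east
  y = (lat - lat0) * deg * R;                         // north
  z = elev - elev0;                                   // up
}
```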

The @code{vpgl_local_rational_camera} is a sub-class of @code{vpgl_rational_camera}, with an added member that defines a local vertical coordinate frame. The member is an instance of the class @code{vpgl_lvcs}.
@figure
@image{lvcs,,3in}
@caption{3}
A local vertical coordinate system defined on the surface of the Earth.
@endcaption
@endfigure
Following is an example of constructing a local rational camera.
@example
// read the rational camera from a file in rpb format
vpgl_rational_camera<double> rcam;
vcl_ifstream is("./rpc_camera.rpb");
is >> rcam;
// get the center of the polynomial approximation volume
double xoff = rcam.offset(vpgl_rational_camera<double>::X_INDX);
double yoff = rcam.offset(vpgl_rational_camera<double>::Y_INDX);
double zoff = rcam.offset(vpgl_rational_camera<double>::Z_INDX);
// set the lvcs origin
vpgl_lvcs lvcs(yoff, xoff, zoff);
// construct the local rational camera
vpgl_local_rational_camera<double> lrcam(lvcs, rcam);
double ul, vl;
// ul and vl will be at the center of the image
lrcam.project(0.0, 0.0, 0.0, ul, vl);
@end example
There is a somewhat standard ASCII format for rational polynomial coefficients called an ``RPB'' file. Alternatively, the National Imagery Transmission Format (NITF) header specifies a data block for RPC coefficients. The @code{vpgl_nitf_rational_camera} class in the @code{file_formats} sub-directory is provided to extract the coefficients from NITF image files.
@subsection @code{vpgl_generic_camera}
The generic camera is represented by a grid of rays, one per pixel of the camera image. In @code{vpgl} the rays are stored in a pyramid of image resolutions to support efficient computation of the @code{project} method. That is, it is necessary to find the ray in the highest resolution grid that is closest to the specified 3-d point. This search is done hierarchically for speed. The backproject function is trivial since each pixel has an associated ray. The main motivation for this camera class is to support ray tracing in volumetric processing.
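The ray-grid idea can be sketched with a minimal data structure: backprojection is a direct lookup of the stored ray for a pixel. The struct and member names are illustrative only, not the @code{vpgl_generic_camera} API, and the pyramid used for @code{project} is omitted.

```cpp
#include <cassert>
#include <vector>

// A ray with a finite origin and a direction.
struct ray {
  double ox, oy, oz;  // origin
  double dx, dy, dz;  // direction
};

// Sketch of a generic camera as a grid of rays, one per pixel (i, j).
struct ray_grid_camera {
  int ni, nj;
  std::vector<ray> rays;  // row-major, ni * nj entries
  ray_grid_camera(int ni_, int nj_) : ni(ni_), nj(nj_), rays(ni_*nj_) {}
  // Backprojection is a trivial lookup of the stored ray.
  const ray& backproject(int i, int j) const { return rays[j*ni + i]; }
};
```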
@subsection @code{vpgl_fundamental_matrix}
The geometric relationship between a pair of images is captured by the @emph{fundamental matrix}, as shown in Figure 4. In the field of photogrammetry, this relationship is called @emph{relative orientation}. Given a point, @math{x}, in a left camera view, its backprojection forms a ray that intersects the 3-d point, @math{X}, that projects to @math{x}. However, the location of @math{X} along the ray cannot be determined from a single image. Thus the image point @math{x} defines a 3-d ray that projects into a second image as a line. In the figure this image line is denoted by @math{l'}. The image point in the right camera corresponding to the projection of @math{X}, @math{x'}, must lie on @math{l'} as shown.

It is also the case that the line @math{l'} intersects a point,
@math{e'}, called the @emph{epipole}. The two epipoles are constructed as the
intersections of the line joining the two camera centers with each
image, i.e., @math{e, e'}. Thus, the relationship between the two
cameras is also referred to as epipolar geometry. The relationship between the point @math{x} and the line @math{l'} is defined by a 3x3 matrix, @math{F}. That is,

@math{l' = F@ x}.

The matrix, @math{F}, is called the fundamental matrix.


The fundamental matrix plays an important role in finding the correspondence of feature points between two images, for example in tracking objects in video sequences. If two image features, one in each image of a pair, correspond to the same 3-d point, it must be the case that the @emph{epipolar constraint} is satisfied by the two points, where

@math{x'^tFx = 0}.

This constraint is used to verify that proposed feature correspondences are consistent with the epipolar geometry of the two views being matched.
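The constraint check itself is a small bilinear computation. The following self-contained sketch (names illustrative, not @code{vpgl} API) evaluates @math{x'^tFx} for homogeneous image points; a residual near zero indicates a consistent correspondence.

```cpp
#include <cassert>
#include <cmath>

// Evaluate the epipolar constraint x'^t F x for homogeneous image
// points x = (u, v, 1) and x' = (u', v', 1). A result near zero means
// the correspondence is consistent with the epipolar geometry.
double epipolar_residual(const double F[3][3],
                         double u, double v, double up, double vp) {
  const double x[3]  = {u, v, 1.0};
  const double xp[3] = {up, vp, 1.0};
  double r = 0.0;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      r += xp[i] * F[i][j] * x[j];
  return r;
}
```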

@figure
@image{epipolar,,3in}
@caption{4}
The geometry of the fundamental matrix.
@endcaption
@endfigure

The fundamental matrix can be constructed from two cameras as illustrated in the following code fragment.
@example
// Ml and Mr, some 3x4 matrices
vpgl_proj_camera<double> Cl(Ml), Cr(Mr); // left and right cameras
vpgl_fundamental_matrix<double> F( Cr, Cl );
@end example
It is also the case that the fundamental matrix is sufficient to
define two cameras, although not uniquely. The method for extracting
the ``left'' camera, assuming the ``right'' camera is the identity
matrix, is,

@example
vpgl_proj_camera<T>
 extract_left_camera(const vnl_vector_fixed<T,3>& v, T lambda ) const;
@end example
The meaning of the arguments @code{v} and @code{lambda} is defined in Hartley and Zisserman, 2nd edition, p. 256.

An example of accessing one of the epipolar lines is illustrated below.
@example
vpgl_fundamental_matrix<double> F;
vgl_homg_point_2d<double> p(1.0, 2.0);
//The right epipolar line
vgl_homg_line_2d<double> l_r = F.r_epipolar_line( p );
@end example


@section Lens Distortion
In addition to camera projection models, there is often a need to model camera lens distortions.
Lens distortions are modeled as 2D-to-2D image-space transformations.
Currently, lens distortion is implemented separately from the camera models, but it should be
possible to build compound camera classes that combine a simple camera projection with a lens distortion.
Lens distortion is implemented in a class hierarchy analogous to the camera classes.

@subsection @code{vpgl_lens_distortion}
The base class for all lens distortions is @code{vpgl_lens_distortion}.
This abstract base class provides methods for distorting and undistorting pixel locations.
The @code{distort} function is abstract and should be implemented by derived classes.
The @code{undistort} function is the inverse operation of @code{distort}.
The base class is designed to have a default implementation of @code{undistort} which uses
an iterative solver to find an inverse mapping.
However, this solver still needs to be implemented for the general case.

@subsection @code{vpgl_radial_distortion}
This class is derived from @code{vpgl_lens_distortion} and is specialized for the case
of lens distortions that are radially symmetric about some center point.
Distortion is simplified to a 1D function applied along rays emanating from the center point.
This class implements @code{distort} in terms of an abstract function @code{distort_radius}.
An implementation of an iterative solver is provided for @code{undistort}.

@subsection @code{vpgl_poly_radial_distortion}
This is a concrete class derived from @code{vpgl_radial_distortion} that models
the 1D radial distortion function with an n-th order polynomial.

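The distort/undistort pair can be illustrated with a one-coefficient polynomial model and a fixed-point inverse, in the spirit of the classes above. The single-coefficient form and the function names are assumptions for illustration, not the actual @code{vpgl} signatures.

```cpp
#include <cassert>
#include <cmath>

// Polynomial radial distortion about the center point:
// r_d = r * (1 + k1 * r^2). One-coefficient illustrative model.
double distort_radius(double r, double k1) {
  return r * (1.0 + k1 * r * r);
}

// Iterative undistort: fixed-point iteration recovering r from r_d,
// analogous to the iterative solver the radial class provides.
double undistort_radius(double rd, double k1) {
  double r = rd;  // initial guess: the distorted radius itself
  for (int i = 0; i < 50; ++i)
    r = rd / (1.0 + k1 * r * r);
  return r;
}
```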

@section Algorithms in @code{vpgl_algo}
A number of algorithms have been developed that make use of the camera data structures in @code{vpgl}. These algorithms are described in the following subsections.
@subsection @code{vpgl_fm_compute_*_point}
The library contains implementations for computing the fundamental matrix from differing numbers of correspondences between the left and right images. The most widely used algorithm is the so-called eight-point algorithm. An example of its application follows.
@example
// corresponding points
vcl_vector< vgl_homg_point_2d<double> > p1r, p1l;
vpgl_fm_compute_8_point fmc;
vpgl_fundamental_matrix<double> fm1est;
fmc.compute( p1r, p1l, fm1est );
@end example

@subsection @code{vpgl_ortho_procrustes}
The orthogonal Procrustes problem is defined by the relationship between
two given 3-d pointsets, i.e.,

@math{X_i = s(RY_i +t)},

where the goal is to determine the scale, @math{s}, the 3-d rotation,
@math{R}, and the translation vector, @math{t}. The constructor for
@code{vpgl_ortho_procrustes} takes the two point sets, and calling
any of the accessors for the unknown transformation parameters will
trigger solution processing. An example of using the algorithm follows.
@example
  // two sets of five 3-d points
  vnl_matrix<double> X(3, 5), Y(3, 5);
  vpgl_ortho_procrustes op(X, Y);
  vgl_rotation_3d<double> R = op.R();
  double s = op.s();
  vnl_vector_fixed<double, 3> t = op.t();
  double error = op.residual_mean_sq_error();
@end example

@subsection @code{vpgl_camera_compute}
This algorithm computes camera parameters from known correspondences between a set of 3-d points and their projections into the camera image plane. An example is the static method,

@example
bool vpgl_proj_camera_compute::
   compute( const vcl_vector< vgl_homg_point_2d<double> >& image_pts,
            const vcl_vector< vgl_homg_point_3d<double> >& world_pts,
            vpgl_proj_camera<double>& camera );
@end example

There is also the class @code{vpgl_perspective_camera_compute}, which can find the @math{K}, @math{R}, @math{t} parameters of the perspective camera, or just @math{R} and @math{t}, given @math{K}.
@subsection @code{vpgl_camera_convert}
This algorithm class converts between various camera types. A typical example is to convert from a rational camera to a generic camera. Recall that a generic camera is represented by a grid of 3-d rays, one for each pixel in a specified image. It is effective to convert the rational camera to a generic camera to enable efficient ray tracing computations. An example of the conversion interface follows.
@example
bool convert( vpgl_local_rational_camera<double> const& rat_cam,
              int ni, int nj,
              vpgl_generic_camera<double> & gen_cam, unsigned level = 0);
@end example
The @code{level} argument refers to the representation of the camera rays, which are stored in a pyramid structure to enable fast computation for projecting a 3-d point into the camera image. @code{level=0} corresponds to the base image resolution of the pyramid.
@subsection @code{vpgl_rational_adjust_onept}
In many cases the rational polynomial projection exhibits very high
relative accuracy, but there can be a small pointing error that can be
corrected by translating the image plane. It is often the case that an
image analyst will select a single correspondence in multiple images
and then the cameras are adjusted so that the corresponding 3-d point
projects correctly in each image. The translations are computed so that
each camera translation is as small as possible, while maintaining a consistent projection from 3-d to 2-d. The geometry of the adjustment process is shown in Figure 5.
@figure
@image{rational_adjust,,3in}
@caption{5}
The adjustment of rational camera @math{(u, v)} translation offsets.
@endcaption
@endfigure
An example of the algorithm use is given below.
@example
//single image correspondence to correct cameras
vgl_point_2d<double> p1(25479.9, 409.113), p2(17528.2, 14638);

vcl_vector<vgl_point_2d<double> > corrs;
corrs.push_back(p1);   corrs.push_back(p2);

vcl_vector<vpgl_rational_camera<double> > cams(2);
cams[0]= rcam1;
cams[1]= rcam2;
vcl_vector<vgl_vector_2d<double> > cam_trans;

vgl_point_3d<double> intersection;
bool good  = vpgl_rational_adjust_onept::adjust(cams, corrs, cam_trans,
                                                intersection);
@end example

@subsection @code{vpgl_rational_adjust}
This algorithm carries out a similar function to @code{vpgl_rational_adjust_onept}, but instead adjusts the translation of a single rational camera from a set of correspondences between 3-d points and 2-d image points. The signature is,
@example
bool adjust(vpgl_rational_camera<double> const& initial_rcam,
            vcl_vector<vgl_point_2d<double> > img_pts,
            vcl_vector<vgl_point_3d<double> > geo_pts,
            vpgl_rational_camera<double> & adj_rcam)
@end example

@subsection @code{vpgl_backproject}
It is useful for many applications to determine a 3-d point as the
backprojection of an image point. The 3-d point is not defined unless
a 3-d surface is also specified. This algorithm class applies
non-linear optimization to find the closest 3-d point on a 3-d surface
to a backprojected camera ray.
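For the special case of a plane and a known camera ray, the intersection has a closed form, which is the geometric core of backprojecting onto a plane. The following self-contained sketch (function name illustrative) intersects a ray with the plane @math{ax + by + cz + d = 0}.

```cpp
#include <cassert>
#include <cmath>

// Intersect a ray (origin o, direction v) with the plane
// a*x + b*y + c*z + d = 0. Returns false if the ray is parallel
// to the plane; otherwise writes the intersection point to pt.
bool ray_plane(const double o[3], const double v[3],
               const double plane[4], double pt[3]) {
  double denom = plane[0]*v[0] + plane[1]*v[1] + plane[2]*v[2];
  if (std::fabs(denom) < 1e-12) return false;
  double num = plane[0]*o[0] + plane[1]*o[1] + plane[2]*o[2] + plane[3];
  double t = -num / denom;  // parameter along the ray
  for (int i = 0; i < 3; ++i) pt[i] = o[i] + t * v[i];
  return true;
}
```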

A polymorphic form of the backproject method is as follows.
@example
bool bproj_plane(const vpgl_camera<double>* cam,
                 vnl_double_2 const& image_point,
                 vnl_double_4 const& plane,
                 vnl_double_3 const& initial_guess,
                 vnl_double_3& world_point);
@end example
Note that since a non-linear solution is required in general (e.g. for a rational camera), it is necessary to provide a guess as the starting point for the solution. In most cases the initial guess can just be the origin. Another method on @code{vpgl_backproject} with a specific rational camera interface is shown below.
@example
...
vpgl_rational_camera<double> rcam(neu_u, den_u, neu_v, den_v,
                                  sx, ox, sy, oy, sz, oz,
                                  su, ou, sv, ov);
vnl_double_4 plane;
vnl_double_2 image_point;
vnl_double_3 initial_guess, world_point;
image_point[0]=1250.0;   image_point[1]=332;
initial_guess[0]=200.0; initial_guess[1]=150.0; initial_guess[2]=15.0;
plane[0]=0; plane[1]=0; plane[2]=1.0; plane[3]=-10.0;

bool success =
  vpgl_backproject::bproj_plane(rcam, image_point, plane,
                                initial_guess, world_point);
@end example

@subsection @code{vpgl_ray}
This class contains various static methods for computing camera rays. For example, a non-linear computation is needed to determine a ray from a rational camera at a given 3-d point in space. The method signature is,
@example
bool ray(vpgl_rational_camera<double> const& rcam,
         vnl_double_3 const& point_3d,
         vnl_double_3& ray)
@end example

Another example, based on a @code{vgl} interface and with an abstract camera, is,
@example
bool ray(const vpgl_camera<double>*  cam,
         vgl_point_3d<double> const& point_3d,
         vgl_vector_3d<double>& ray)
@end example

This method polymorphically computes the ray, depending on the camera subclass.

@subsection @code{vpgl_ray_intersect}
This algorithm class computes the 3-d point that lies closest to a set of camera rays. The algorithm uses a non-linear solution so that rational cameras can be handled. An example intersection computation is illustrated below.
@example
  vcl_vector<vpgl_camera<double>* > cams(2);
  cams[0]= (vpgl_camera<double>*)(&rcam1);
  cams[1]= (vpgl_camera<double>*)(&rcam2);
  vcl_vector<vgl_point_2d<double> > image_pts;
  image_pts.push_back(p1);   image_pts.push_back(p2);
  vpgl_ray_intersect ri(2);
  vgl_point_3d<double> intersection;
  vgl_point_3d<double> initial_point(44.3542, 33.1855, 32);
  bool good =
    ri.intersect(cams, image_pts, initial_point, intersection);
@end example
Note that a class instance is required, rather than a static method.
