stationary_camera_self_calibration — Perform a self-calibration of a stationary projective camera.
stationary_camera_self_calibration performs a
self-calibration of a stationary projective camera. Here,
stationary means that the camera may only rotate around the optical
center and may zoom. Hence, the optical center may not move.
Projective means that the camera model is a pinhole camera that can
be described by a projective 3D-2D transformation. In particular,
radial distortions can only be modeled for cameras with constant
parameters. If the lens exhibits significant radial distortions
they should be removed, at least approximately, with
change_radial_distortion_image.
The camera model being used can be described as follows:
x = P * X.
Here, x is a homogeneous 2D vector,
X a homogeneous 3D vector, and P
a homogeneous 3x4 projection matrix. The projection
matrix P can be decomposed as follows:

P = K * [R | t] .

Here, R is a 3x3 rotation matrix
and t is an inhomogeneous 3D vector. These two
entities describe the position (pose) of the camera in 3D space.
This convention is analogous to the convention used in
camera_calibration, i.e., for
R=I and t=0 the x
axis points to the right, the y axis downwards, and the z axis
points forward. K is the calibration matrix of
the camera (the camera matrix), which can be written as

        ( f    s    u )
    K = ( 0   a*f   v )
        ( 0    0    1 ) .

Here, f is the focal length of the camera in pixels, a the
aspect ratio of the pixels, s is a factor that models the skew of
the image axes, and (u,v) is the principal point of the camera in
pixels. In this convention, the x axis corresponds to the column
axis and the y axis to the row axis.
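As a reading aid, the following NumPy sketch builds a calibration matrix with the layout given above and projects a homogeneous 3D point with P = K * [R | t]. It is an illustration of the model only; the helper names and numbers are assumptions, not HALCON code.

import numpy as np

# Calibration matrix as described above: focal length f in pixels,
# aspect ratio a, skew s, principal point (u, v).
def calibration_matrix(f, a, s, u, v):
    return np.array([[f,       s, u],
                     [0.0, a * f, v],
                     [0.0,   0.0, 1.0]])

# Projection matrix P = K * [R | t]; for R = I and t = 0 the camera sits at
# the origin, x to the right, y downwards, z forward.
def projection_matrix(K, R, t):
    return K @ np.hstack([R, t.reshape(3, 1)])

K = calibration_matrix(f=800.0, a=1.0, s=0.0, u=320.0, v=240.0)
P = projection_matrix(K, np.eye(3), np.zeros(3))

X = np.array([0.1, -0.2, 1.0, 1.0])       # homogeneous 3D point
x = P @ X                                 # homogeneous 2D point
col, row = x[0] / x[2], x[1] / x[2]       # x axis = column, y axis = row
print(col, row)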
Since the camera is stationary, it can be assumed that
t=0. With this convention, it is easy to see
that the fourth coordinate of the homogeneous 3D vector
X has no influence on the position of the
projected 3D point. Consequently, the fourth coordinate can be set
to 0, and it can be seen that X can be regarded as
a point at infinity, and hence represents a direction in 3D. With
this convention, the fourth coordinate of X can be
omitted, and hence X can be regarded as an
inhomogeneous 3D vector that is only determined up to scale,
since it represents a direction. With this, the above projection
equation can be written as follows:

x = K * R * X .
If two images of the same point are taken with a stationary camera,
the following equations hold:

x_i = K_i * R_i * X ,    x_j = K_j * R_j * X ,

where the two images are denoted by i and j.
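The following sketch (plain NumPy, not HALCON code) projects the same direction into two images of a rotating, zooming camera and numerically checks the relations between the two image points and the camera matrices that are exploited below; the concrete numbers are arbitrary.

import numpy as np

def K_mat(f, a=1.0, s=0.0, u=320.0, v=240.0):
    return np.array([[f, s, u], [0.0, a * f, v], [0.0, 0.0, 1.0]])

def rot_y(angle):                       # rotation around the camera's y axis
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Two images taken by a stationary (rotating and zooming) camera.
K_i, R_i = K_mat(f=800.0), np.eye(3)
K_j, R_j = K_mat(f=1000.0), rot_y(np.deg2rad(5.0))

X = np.array([0.05, -0.1, 1.0])         # direction in 3D (defined up to scale)
x_i = K_i @ R_i @ X                     # homogeneous image points
x_j = K_j @ R_j @ X

# Projective transformation from image i to image j (derived below).
H_ij = K_j @ R_j @ np.linalg.inv(R_i) @ np.linalg.inv(K_i)
x_j_from_i = H_ij @ x_i
print(np.allclose(x_j / x_j[2], x_j_from_i / x_j_from_i[2]))   # True

# Eliminating the rotation: H_ij transfers K_i K_i^T to K_j K_j^T.
lhs = K_j @ K_j.T
rhs = H_ij @ K_i @ K_i.T @ H_ij.T
print(np.allclose(lhs / lhs[2, 2], rhs / rhs[2, 2]))           # True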
From these equations, constraints on the camera parameters can be
derived in two ways. First, the rotation can be eliminated,
which leads to equations that relate the camera matrices with the
projective 2D transformation between the two images. Let H_ij
be the projective transformation from image i to image j. Then,

H_ij = K_j * R_j * R_i^(-1) * K_i^(-1) ,

and, since the rotation matrices are orthogonal, eliminating the
rotation yields

K_j * K_j^T = H_ij * (K_i * K_i^T) * H_ij^T .

For EstimationMethod = 'linear', these equations, stacked over all
overlapping image pairs (s,d), are solved as a linear system for the
entries of the symmetric matrices K * K^T, from which the camera
parameters are extracted. Alternatively, the sum of the squared
residuals of these equations can be minimized:

sum over (s,d) of || K_d * K_d^T - H_sd * (K_s * K_s^T) * H_sd^T ||^2 .
Here, analogously to the linear method, {(s,d)} is
the set of overlapping image pairs specified by MappingSource
and MappingDest. This method is used for
EstimationMethod = 'nonlinear'. To start the
minimization, the camera parameters are initialized with the results
of the linear method. These two methods are very fast and return
acceptable results if the projective 2D transformations
are sufficiently accurate. For this, it
is essential that the images do not have radial distortions. It can
also be seen that in the above two methods the camera parameters are
determined independently of the rotation parameters, and
consequently the possible constraints are not fully exploited. In
particular, it can be seen that it is not enforced that the
projections of the same 3D point lie close to each other in all
images. Therefore, stationary_camera_self_calibration
offers a complete bundle adjustment as a third method
(EstimationMethod = 'gold_standard'). Here, the
camera parameters and rotations as well as the directions in 3D
corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization
by minimizing the following error:

sum over i,j of d( x_i,j , K_i * R_i * X_j )^2 ,

where x_i,j is the point measured in image i that corresponds to the
reconstructed direction X_j, d(.,.) denotes the distance in the image,
and, if specified, a penalty term for principal points far from the
image center is added (see below).
In this equation, only the terms for which the reconstructed
direction is visible in image i are taken
into account. The starting values for the parameters in the bundle
adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle
adjustment requires a significantly longer execution time than the
two simpler methods. Nevertheless, because the bundle adjustment
returns significantly better results, it should be preferred.
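As an illustration of what the bundle adjustment minimizes, the following sketch evaluates a reprojection error of the kind described above for given camera matrices, rotations, and reconstructed directions. It is a simplified stand-in for the 'gold_standard' objective under the stated assumptions, not the operator's internal implementation, and it omits the optional principal point penalty.

import numpy as np

def dehom(x):
    """Homogeneous 2D point -> (column, row)."""
    return x[:2] / x[2]

def gold_standard_error(Ks, Rs, dirs, observations, visible):
    """Sum of squared reprojection errors over all (image i, direction j)
    for which direction j is visible in image i."""
    err = 0.0
    for i, (K, R) in enumerate(zip(Ks, Rs)):
        for j, X in enumerate(dirs):
            if not visible[i][j]:
                continue
            diff = observations[i][j] - dehom(K @ R @ X)
            err += float(diff @ diff)
    return err

# Minimal usage: one direction observed (with noise) in two images.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
X = np.array([0.05, -0.1, 1.0])
obs = [[dehom(K @ X) + 0.3],            # image 1, R = I
       [dehom(K @ X) - 0.3]]            # image 2, R = I (kept simple)
print(gold_standard_error([K, K], [np.eye(3), np.eye(3)], [X], obs,
                          [[True], [True]]))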
In each of the three methods the camera parameters that should be
computed can be specified. The remaining parameters are set to a
constant value. Which parameters should be computed is determined
with the parameter CameraModel, which contains a tuple of
values. CameraModel must always contain the value
'focus', which specifies that the focal length f is
computed. If CameraModel contains the value
'principal_point', the principal point (u,v) of the camera
is computed. If not, the principal point is set to
(ImageWidth/2, ImageHeight/2). If
CameraModel contains the value 'aspect', the aspect
ratio a of the pixels is determined, otherwise it is set to 1. If
CameraModel contains the value 'skew', the skew of
the image axes is determined, otherwise it is set to 0. Only the
following combinations of the parameters are allowed:
'focus'"focus""focus""focus""focus", ['focus', 'principal_point']["focus", "principal_point"]["focus", "principal_point"]["focus", "principal_point"]["focus", "principal_point"],
['focus', 'aspect']["focus", "aspect"]["focus", "aspect"]["focus", "aspect"]["focus", "aspect"], ['focus', 'principal_point',
'aspect']["focus", "principal_point",
"aspect"]["focus", "principal_point",
"aspect"]["focus", "principal_point",
"aspect"]["focus", "principal_point",
"aspect"] and ['focus', 'principal_point', 'aspect',
'skew']["focus", "principal_point", "aspect",
"skew"]["focus", "principal_point", "aspect",
"skew"]["focus", "principal_point", "aspect",
"skew"]["focus", "principal_point", "aspect",
"skew"].
When using EstimationMethod = 'gold_standard' to
determine the principal point, it is possible to penalize
estimates that lie far away from the image center. This can be done by
appending a sigma to the value 'principal_point', e.g.,
'principal_point:0.5'. If no sigma is given, the penalty term in the
above error equation is omitted.
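A small sketch of how such a value could be assembled, together with one plausible shape of the penalty term; the penalty function below is an assumption for illustration, as the exact weighting used internally is not documented here.

# The CameraModel value with an appended sigma, as described above:
sigma = 0.5
camera_model = ['focus', 'principal_point:' + str(sigma), 'aspect']

# Assumed shape of the penalty: squared distance of the estimated principal
# point (u, v) from the image center, weighted more strongly for small sigma.
def principal_point_penalty(u, v, image_width, image_height, sigma):
    du = u - image_width / 2.0
    dv = v - image_height / 2.0
    return (du * du + dv * dv) / (sigma * sigma)

print(camera_model)
print(principal_point_penalty(330.0, 232.0, 640, 480, sigma))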
The number of images that are used for the calibration is passed in
NumImages. Based on the number of images, several
constraints for the camera model must be observed. If only two
images are used, even under the assumption of constant parameters
not all camera parameters can be determined. In this case, the skew
of the image axes should be set to 0 by not adding
'skew'"skew""skew""skew""skew" to CameraModelCameraModelCameraModelCameraModelcameraModel. If
FixedCameraParamsFixedCameraParamsFixedCameraParamsFixedCameraParamsfixedCameraParams = 'false'"false""false""false""false" is used, the full
set of camera parameters can never be determined, no matter how many
images are used. In this case, the skew should be set to 0 as well.
Furthermore, it should be noted that the aspect ratio can only be
determined accurately if at least one image is rotated around the
optical axis (the z axis of the camera coordinate system) with
respect to the other images. If this is not the case the
computation of the aspect ratio should be suppressed by not
adding 'aspect' to CameraModel.
List of values for EstimationMethod: 'gold_standard', 'linear', 'nonlinear'
List of values for CameraModel: 'aspect', 'focus', 'kappa', 'principal_point', 'skew'
If the parameters are valid, the operator
stationary_camera_self_calibration returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating
and Zooming Cameras”; International Journal of Computer Vision;
vol. 45, no. 2; pp. 107-127; 2001.