match_fundamental_matrix_distortion_ransac — Compute the fundamental matrix and the radial distortion coefficient
for a pair of stereo images by automatically finding correspondences
between image points.
Given a set of coordinates of characteristic points
(Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, which must be of identical
size, match_fundamental_matrix_distortion_ransac
automatically finds the correspondences between the characteristic
points and determines the geometry of the stereo setup. For unknown
cameras the geometry of the stereo setup is represented by the
fundamental matrix FMatrix and the radial distortion
coefficient Kappa. All corresponding points
must fulfill the epipolar constraint:

   (x2', y2', 1) * FMatrix * (x1', y1', 1)^T = 0

Here, (x1', y1') and (x2', y2')
denote image points that are obtained by undistorting the input
image points with the division model (see
Calibration / Multi-View):

   x' = x / (1 + Kappa * (x^2 + y^2))
   y' = y / (1 + Kappa * (x^2 + y^2))

Here, (x, y) denote the
distorted image points, specified relative to the image center, and
w and h denote the width and height of the input images. Thus,
match_fundamental_matrix_distortion_ransac assumes that the
principal point of the camera, i.e., the center of the radial
distortions, lies at the center of the image.
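The division model and the epipolar constraint above can be sketched numerically as follows. This is an illustration only, not the operator's implementation; the convention of taking the image center as (w/2, h/2) is an assumption for this sketch.

```python
import numpy as np

def undistort_division(c, r, kappa, w, h):
    """Undistort a pixel (column c, row r) with the division model.
    Coordinates are first shifted to be relative to the image center,
    which the operator assumes to be the principal point.
    Center convention (w/2, h/2) is an assumption of this sketch."""
    x = c - 0.5 * w
    y = r - 0.5 * h
    s = 1.0 + kappa * (x * x + y * y)
    return x / s, y / s

def epipolar_residual(F, p1, p2):
    """Evaluate the epipolar constraint (x2', y2', 1) * F * (x1', y1', 1)^T
    for undistorted points given as (x, y) tuples; 0 means the pair
    satisfies the constraint exactly."""
    v1 = np.array([p1[0], p1[1], 1.0])
    v2 = np.array([p2[0], p2[1], 1.0])
    return float(v2 @ F @ v1)
```

Note that for Kappa = 0 the model reduces to the undistorted pinhole case, and a positive Kappa pulls points toward the image center.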
Note the column/row ordering in the point coordinates above: since
the fundamental matrix encodes the projective relation between two
stereo images embedded in 3D space, the x/y notation must be
compliant with the camera coordinate system. Therefore, (x,y)
coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be
extracted with point operators like points_foerstner or
points_harris. The matching itself is carried out in two
steps: first, gray value correlations of mask windows around the
input points in the first and the second image are determined, and an
initial matching between them is generated using the similarity of
the windows in both images. Then, the RANSAC algorithm is applied
to find the fundamental matrix and radial distortion coefficient
that maximize the number of correspondences under the epipolar
constraint.
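The first matching step can be sketched as follows. This is a simplified illustration, not the operator's actual implementation; the window half-size and the acceptance score are assumptions.

```python
import numpy as np

def ncc(win1, win2):
    """Normalized cross-correlation of two equally sized gray-value windows."""
    a = win1 - win1.mean()
    b = win2 - win2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def initial_matches(img1, img2, pts1, pts2, half=5, min_score=0.8):
    """Step 1: pair each point in the first image with the most similar
    point in the second image, judged by window correlation."""
    matches = []
    for i, (r1, c1) in enumerate(pts1):
        w1 = img1[r1 - half:r1 + half + 1, c1 - half:c1 + half + 1]
        scores = [ncc(w1, img2[r2 - half:r2 + half + 1,
                               c2 - half:c2 + half + 1])
                  for (r2, c2) in pts2]
        j = int(np.argmax(scores))
        if scores[j] >= min_score:
            matches.append((i, j))
    return matches

# Step 2 (not shown): RANSAC repeatedly estimates F and Kappa from
# random minimal subsets of these matches and keeps the model with
# the most inliers under the epipolar constraint.
```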
To increase the speed of the algorithm, the search area for the match
candidates can be limited to a rectangle by specifying its size and
offset. Only points within this search window are considered. The offset of the
center of the search window in the second image with respect to the
position of the current point in the first image is given by
RowMove and ColMove.
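The candidate restriction can be sketched like this (an illustration under assumed generic names; `row_tol` and `col_tol` stand for the window's half-size parameters):

```python
def candidates_in_window(r1, c1, pts2, row_move, col_move, row_tol, col_tol):
    """Keep only those second-image points that lie inside the search
    window centered at (r1 + row_move, c1 + col_move)."""
    rc, cc = r1 + row_move, c1 + col_move
    return [j for j, (r2, c2) in enumerate(pts2)
            if abs(r2 - rc) <= row_tol and abs(c2 - cc) <= col_tol]
```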
If the second camera is rotated around the optical axis with respect
to the first camera, the parameter Rotation may contain an
estimate for the rotation angle or an angle interval in radians. A
good guess will increase the quality of the gray value matching. If
the actual rotation differs too much from the specified estimate,
the matching will typically fail. In this case, an angle interval
should be specified, and Rotation is then a tuple with two
elements. The larger the given interval, the slower the
operator is, since the RANSAC algorithm is run over all
(automatically determined) angle increments within the interval.
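The handling of a single angle versus an interval can be sketched as follows; the fixed 5-degree step is an assumption (the operator determines its increments automatically):

```python
import numpy as np

def angle_increments(rotation, step=np.deg2rad(5.0)):
    """Expand a Rotation-style parameter into the angles to try:
    a single estimate is used as-is, an interval (lo, hi) is sampled
    in fixed increments including both endpoints."""
    if np.isscalar(rotation):
        return [float(rotation)]
    lo, hi = rotation
    n = max(1, int(np.ceil((hi - lo) / step)))
    return list(np.linspace(lo, hi, n + 1))
```

The matching (and RANSAC) would then be repeated once per returned angle, which is why a wide interval slows the operator down.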
The parameter EstimationMethod decides whether the relative
orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If
EstimationMethod is either 'linear' or
'gold_standard', the relative orientation is arbitrary. If
the left and right cameras are identical and the relative
orientation between them is a pure translation,
EstimationMethod can be set to 'trans_linear' or
'trans_gold_standard'. The typical application for this
special motion case is the scenario of a single fixed camera looking
onto a moving conveyor belt. In order to get a unique solution for
the correspondence problem, the minimum required number of
corresponding points is nine in the general case and four in the
special translational case.
The fundamental matrix is computed by a linear algorithm if
EstimationMethod is set to 'linear' or
'trans_linear'. This algorithm is very fast. For the pure
translation case (EstimationMethod = 'trans_linear'),
the linear method returns accurate results
for small to moderate noise of the point coordinates and for most
distortions (except for very small distortions). For a general
relative orientation of the two cameras (EstimationMethod
= 'linear'), the linear method only returns accurate
results for very small noise of the point coordinates and for
sufficiently large distortions. For EstimationMethod =
'gold_standard' or 'trans_gold_standard', a
mathematically optimal but slower optimization is used, which
minimizes the geometric reprojection error of reconstructed
projective 3D points. For a general relative orientation of the two
cameras, EstimationMethod = 'gold_standard' should therefore
be selected in general.
The value Error indicates the overall quality of the
estimation procedure and is the mean symmetric Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the above constraints are considered to
be corresponding points. Points1 contains the indices of
the matched input points from the first image and Points2
contains the indices of the corresponding points in the second
image.
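The per-correspondence quantity behind Error can be sketched as follows (a standard symmetric epipolar distance; the operator's Error is its mean over all accepted matches):

```python
import numpy as np

def symmetric_epipolar_distance(F, p1, p2):
    """Mean of the two point-to-epipolar-line distances (in pixels)
    for one correspondence (points given as (x, y) tuples)."""
    v1 = np.array([p1[0], p1[1], 1.0])
    v2 = np.array([p2[0], p2[1], 1.0])
    l2 = F @ v1    # epipolar line of p1 in image 2: a*x + b*y + c = 0
    l1 = F.T @ v2  # epipolar line of p2 in image 1
    d2 = abs(v2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(v1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)
```

A perfect correspondence lies exactly on both epipolar lines and contributes a distance of 0; DistanceThreshold bounds this deviation for a pair to count as an inlier.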
List of values: 'gold_standard', 'linear', 'trans_gold_standard', 'trans_linear'
DistanceThreshold (input_control) number → (real / integer)
Maximal deviation of a point from its epipolar line.