match_essential_matrix_ransac — Compute the essential matrix for a pair of stereo images by automatically
finding correspondences between image points.
The operator match_essential_matrix_ransac is designed to deal with
a linear camera model.
The internal camera parameters are passed by the arguments
CamMat1 and CamMat2, which are
3×3 upper triangular matrices describing an affine
transformation. The relation between a vector (X,Y,1), representing the
direction from the camera to the viewed 3D space point, and its (projective)
2D image coordinates (col,row,1) is:

(col, row, 1)^T = CamMat * (X, Y, 1)^T
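A minimal numpy sketch of this relation, assuming the usual pinhole parameterization of an upper triangular camera matrix; the focal lengths, skew, and principal point values below are illustrative placeholders, not operator defaults, and the snippet does not use the HALCON interface itself:

```python
import numpy as np

# Illustrative upper triangular camera matrix (placeholder values).
fx, fy, s, cx, cy = 800.0, 800.0, 0.0, 320.0, 240.0
cam_mat = np.array([[fx, s,  cx],
                    [0., fy, cy],
                    [0., 0., 1.]])

# Direction from the camera to the viewed 3D point, scaled to (X, Y, 1).
direction = np.array([0.1, -0.05, 1.0])

# Projective 2D image coordinates (col, row, 1) = CamMat * (X, Y, 1).
col, row, w = cam_mat @ direction          # w stays 1 here

# Inverse mapping: pixel coordinates back to a normalized direction,
# the coordinate frame in which the essential matrix operates.
direction_back = np.linalg.solve(cam_mat, np.array([col, row, w]))
```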
The matching process is based on characteristic points, which can be
extracted with point operators like points_foerstner or
points_harris.
The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first
and the second image are determined and an initial matching between
them is generated using the similarity of the windows in both images.
Then, the RANSAC algorithm is applied to find the essential matrix
that maximizes the number of correspondences under the epipolar constraint.
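As a rough sketch of this second step, the snippet below counts how many tentative correspondences are consistent with a candidate essential matrix; it is a simplified illustration of the inlier test inside a RANSAC loop (working on normalized image coordinates), not the operator's actual implementation:

```python
import numpy as np

def count_epipolar_inliers(E, pts1_norm, pts2_norm, threshold):
    """Count correspondences consistent with x2^T E x1 = 0.

    pts1_norm, pts2_norm: (N, 3) arrays of normalized homogeneous image
    coordinates (pixel coordinates premultiplied by the inverse camera
    matrices). `threshold` bounds the point-to-epipolar-line distance.
    """
    inliers = 0
    for x1, x2 in zip(pts1_norm, pts2_norm):
        l2 = E @ x1            # epipolar line of x1 in the second image
        l1 = E.T @ x2          # epipolar line of x2 in the first image
        d2 = abs(x2 @ l2) / np.hypot(l2[0], l2[1])
        d1 = abs(x1 @ l1) / np.hypot(l1[0], l1[1])
        if max(d1, d2) <= threshold:
            inliers += 1
    return inliers
```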
To increase the speed of the algorithm, the search area for the
matching operations can be limited: only points within a restricted
search window in the second image are considered. The offset of the
center of this search window with respect to the
position of the current point in the first image is given by
RowMove and ColMove.
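The following snippet sketches this idea: for a point in the first image, candidate matches in the second image are limited to a window whose center is shifted by the given row and column offsets. The window half-sizes used here are hypothetical helper arguments for illustration, not operator parameters:

```python
import numpy as np

def candidates_in_window(p1, pts2, row_move, col_move, half_height, half_width):
    """Select points of the second image inside the shifted search window.

    p1: (row, col) of the current point in the first image.
    pts2: (N, 2) array of (row, col) points in the second image.
    """
    center = np.array([p1[0] + row_move, p1[1] + col_move])
    offsets = np.abs(pts2 - center)
    mask = (offsets[:, 0] <= half_height) & (offsets[:, 1] <= half_width)
    return pts2[mask]
```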
If the second camera is
rotated around the optical axis with respect to the first camera,
the parameter Rotation may contain an estimate for the
rotation angle or an angle interval in radians. A good guess will
increase the quality of the gray value matching. If the actual
rotation differs too much from the specified estimate, the matching
will typically fail. In this case, an angle interval should be
specified, and Rotation is a tuple with two elements. The
larger the given interval, the slower the operator becomes, since the
RANSAC algorithm is run over all angle increments within the
interval.
The parameter EstimationMethod decides whether the relative
orientation between the cameras is of a special type and which algorithm is
to be applied for its computation.
If EstimationMethod is either 'normalized_dlt' or
'gold_standard', the relative orientation is arbitrary.
Choosing 'trans_normalized_dlt' or 'trans_gold_standard'
means that the relative motion between the cameras is a pure translation.
The typical application for this special motion case is the
scenario of a single fixed camera looking onto a moving conveyor belt.
In order to get a unique solution to the correspondence problem, the minimum
required number of corresponding points is six in the general case and three
in the special, translational case.
The essential matrix is computed by a linear algorithm if
'normalized_dlt' or 'trans_normalized_dlt' is chosen.
With 'gold_standard' or 'trans_gold_standard'
the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well.
Here, 'normalized_dlt' and 'gold_standard' stand for the
normalized direct linear transformation and the gold standard algorithm, respectively.
Note that, in general, the found correspondences differ depending on the
chosen estimation method.
The value Error indicates the overall quality of the estimation
procedure and is the mean Euclidean distance in pixels between the
points and their corresponding epipolar lines.
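A hedged sketch of how such an error measure can be computed from the essential matrix: the essential matrix is first mapped to a fundamental matrix using the two camera matrices, and the point-to-epipolar-line distances are evaluated in pixel coordinates. Whether the distances are averaged over one image or both, as done here, is an assumption; this mirrors the described quantity but is not the operator's internal code:

```python
import numpy as np

def mean_epipolar_distance(E, cam_mat1, cam_mat2, pts1, pts2):
    """Mean Euclidean distance (pixels) between points and epipolar lines.

    pts1, pts2: (N, 3) homogeneous pixel coordinates (col, row, 1) of
    corresponding points in the first and second image.
    """
    # Fundamental matrix corresponding to E for pixel coordinates.
    F = np.linalg.inv(cam_mat2).T @ E @ np.linalg.inv(cam_mat1)
    dists = []
    for x1, x2 in zip(pts1, pts2):
        l2 = F @ x1                      # epipolar line of x1 in image 2
        l1 = F.T @ x2                    # epipolar line of x2 in image 1
        dists.append(abs(x2 @ l2) / np.hypot(l2[0], l2[1]))
        dists.append(abs(x1 @ l1) / np.hypot(l1[0], l1[1]))
    return float(np.mean(dists))
```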
For the operator match_essential_matrix_ransac a special
configuration of scene points and cameras exists: if all 3D points lie in a
single plane and additionally are all closer to one of the two cameras, then
the solution for the essential matrix is not unique but twofold.
As a consequence, both solutions are computed and returned by the operator.
This means that the output parameters EMatrix, CovEMat,
and Error are of double length, and the values of the second
solution are simply concatenated behind the values of the first one.
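A small sketch of handling such a double-length result, assuming EMatrix is returned as a flat tuple of 9 values per solution (18 values in the twofold case) in row-major order; how the values are actually retrieved depends on the language interface used:

```python
import numpy as np

def split_essential_solutions(ematrix_values):
    """Split a flat EMatrix result into one or two 3x3 matrices.

    ematrix_values: sequence of 9 values (unique solution) or 18 values
    (twofold solution, second solution concatenated behind the first).
    Row-major ordering of the 9 values per matrix is assumed.
    """
    values = np.asarray(ematrix_values, dtype=float)
    return [values[i:i + 9].reshape(3, 3) for i in range(0, values.size, 9)]
```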
List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'
DistanceThreshold (input_control) number → (real / integer)
Maximal deviation of a point from its epipolar line.
Default value: 1
Typical range of values: 0.5 ≤ DistanceThreshold ≤ 5