vector_to_rel_pose — Compute the relative orientation between two cameras given image point
correspondences and known camera parameters and reconstruct 3D space points.
For a stereo configuration with known camera parameters the geometric
relation between the two images is defined by the relative pose.
The operator vector_to_rel_pose computes the relative pose from,
in general, at least six point correspondences in the image pair.
RelPose indicates the relative pose of camera 1 with respect
to camera 2 (see create_pose for more information about
poses and their representations). This is in accordance with the
explicit calibration of a stereo setup using the operator
calibrate_cameras.
Now, let R, t be the rotation and translation
of the relative pose, i.e., a point X1 in camera-1 coordinates maps to
X2 = R * X1 + t in camera-2 coordinates. Then, the essential matrix
E is defined as E = [t]_x * R, where [t]_x
denotes the 3x3 skew-symmetric
matrix realizing the cross product with the vector t.
The pose can be determined from the epipolar constraint, which every
correspondence (x1, x2) in normalized image coordinates must satisfy:
x2^T * E * x1 = 0.
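This relation can be checked numerically. The following sketch (plain NumPy, not HALCON code; all pose and point values are hypothetical) builds E = [t]_x * R from a relative pose and verifies the epipolar constraint for a synthetic point:

```python
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix [t]_x with skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose of camera 1 w.r.t. camera 2:
# a point X1 in camera-1 coordinates maps to X2 = R @ X1 + t.
angle = np.deg2rad(10.0)
R = np.array([[np.cos(angle), 0.0, np.sin(angle)],
              [0.0, 1.0, 0.0],
              [-np.sin(angle), 0.0, np.cos(angle)]])
t = np.array([1.0, 0.0, 0.2])
t = t / np.linalg.norm(t)          # translation only defined up to scale

E = skew(t) @ R                    # essential matrix E = [t]_x * R

# Verify the epipolar constraint x2^T * E * x1 = 0 for a synthetic point.
X1 = np.array([0.3, -0.2, 4.0])    # 3D point in camera-1 coordinates
X2 = R @ X1 + t
x1 = X1 / X1[2]                    # normalized image coordinates
x2 = X2 / X2[2]
print(abs(x2 @ E @ x1))            # ~0, up to floating-point error
```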
Note that the essential matrix is a projective entity and is thus
defined only up to a scaling factor. It follows that the
translation vector of the relative pose can also be determined only up to
scale. In fact, the computed translation vector is always
normalized to unit length. As a consequence, a three-dimensional
reconstruction of the scene, here in terms of points given by their
coordinates (X,Y,Z), can be carried
out only up to a single global scaling factor. If absolute 3D
coordinates of the reconstruction are required, the unknown
scaling factor can be computed from a gauge, which has to be visible
in both images. For example, a simple gauge can be given by any
known distance between points in the scene.
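Resolving the global scale from such a gauge amounts to one division. A minimal sketch (hypothetical reconstructed coordinates; the gauge is an assumed known distance between two of the scene points):

```python
import numpy as np

# Reconstructed points, known only up to a global scale (hypothetical values).
pts = np.array([[0.1, 0.2, 1.5],
                [0.4, 0.1, 1.8],
                [0.0, 0.3, 2.0]])

# Gauge: the true distance between points 0 and 1 is known, e.g. 0.60 m.
true_dist = 0.60
reconstructed_dist = np.linalg.norm(pts[1] - pts[0])

scale = true_dist / reconstructed_dist
pts_metric = scale * pts   # reconstruction in absolute (metric) units
# The translation of the relative pose would be scaled by the same factor.
```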
The parameter Method decides whether the relative orientation
between the cameras is of a special type and which algorithm is to be applied
for its computation.
If Method is either 'normalized_dlt' or
'gold_standard', the relative orientation is arbitrary.
Choosing 'trans_normalized_dlt' or 'trans_gold_standard'
means that the relative motion between the cameras is a pure translation.
The typical application of this special motion case is the
scenario of a single fixed camera looking onto a moving conveyor belt.
In this case the minimum required number of corresponding points is just two
instead of six in the general case.
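The two-point minimum in the pure-translation case can be made plausible with a sketch (plain NumPy, hypothetical values; this illustrates the geometry, not HALCON's actual algorithm): with R = I the epipolar constraint reduces to t . (x1 x x2) = 0, so each correspondence constrains t to one epipolar plane, and two planes intersect in the translation direction.

```python
import numpy as np

# Ground-truth pure translation (hypothetical, unit length).
t_true = np.array([0.8, 0.0, 0.6])

# Two synthetic 3D points seen by both cameras (X2 = X1 + t, R = I).
X1 = np.array([[0.3, -0.2, 4.0],
               [-0.5, 0.4, 3.0]])
X2 = X1 + t_true

x1 = X1 / X1[:, 2:3]               # normalized image coordinates
x2 = X2 / X2[:, 2:3]

# Each correspondence gives t . (x1 x x2) = 0; with two of them,
# t is the common direction of the two epipolar planes.
n = np.cross(x1, x2)               # one epipolar-plane normal per point
t_est = np.cross(n[0], n[1])
t_est = t_est / np.linalg.norm(t_est)
if t_est @ t_true < 0:             # the sign is not fixed by the constraint
    t_est = -t_est
print(t_est)                       # ~ [0.8, 0.0, 0.6]
```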
The relative pose is computed by a linear algorithm if
'normalized_dlt' or 'trans_normalized_dlt' is chosen.
With 'gold_standard' or 'trans_gold_standard'
the algorithm gives a statistically optimal result.
Here, 'normalized_dlt' and 'gold_standard' stand for the
normalized direct linear transformation and the gold standard algorithm, respectively.
All methods return the coordinates (X,Y,Z)
of the reconstructed 3D points. The optimal methods also return
the covariances of the 3D points in CovXYZ.
Let n be the number of points;
then the n 3x3 covariance matrices are concatenated and
stored in a tuple of length 9n.
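A small helper can unpack this flat tuple. The sketch below assumes row-major concatenation of the 3x3 matrices (an assumption; the helper name and example values are hypothetical):

```python
import numpy as np

def cov_matrix(cov_xyz, i):
    """Extract the 3x3 covariance matrix of the i-th point from a
    flat tuple of length 9*n (row-major concatenation assumed)."""
    return np.asarray(cov_xyz[9 * i: 9 * (i + 1)]).reshape(3, 3)

# Hypothetical flat covariance tuple for n = 2 points.
cov_xyz = tuple(float(v) for v in range(18))
print(cov_matrix(cov_xyz, 1))      # entries 9..17 as a 3x3 matrix
```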
Additionally, the optimal methods return the 6x6 covariance
matrix of the pose in CovRelPose.
The value Error indicates the overall quality of the optimization
process and is the root-mean-square Euclidean distance in pixels between the
points and their corresponding epipolar lines.
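Such an error measure can be sketched as follows (plain NumPy; whether the distances are measured symmetrically in both images, as done here, is an assumption, and the toy fundamental matrix and pixel coordinates are hypothetical):

```python
import numpy as np

def rms_epipolar_error(F, p1, p2):
    """RMS distance in pixels between each point and the epipolar line
    induced by its correspondence, measured in both images.
    p1, p2: (n, 2) pixel coordinates; F: 3x3 fundamental matrix."""
    h1 = np.hstack([p1, np.ones((len(p1), 1))])   # homogeneous coordinates
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    l2 = h1 @ F.T                                 # epipolar lines in image 2
    l1 = h2 @ F                                   # epipolar lines in image 1
    d2 = np.abs(np.sum(h2 * l2, axis=1)) / np.hypot(l2[:, 0], l2[:, 1])
    d1 = np.abs(np.sum(h1 * l1, axis=1)) / np.hypot(l1[:, 0], l1[:, 1])
    return float(np.sqrt(np.mean(np.concatenate([d1, d2]) ** 2)))

# Toy check: F for a pure horizontal translation gives horizontal
# epipolar lines, so perfect correspondences share the same row.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
p1 = np.array([[10., 5.], [3., 7.]])
p2 = np.array([[20., 5.], [8., 7.]])   # perfect correspondences
print(rms_epipolar_error(F, p1, p2))   # 0.0
```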
For the operator vector_to_rel_pose a special configuration
of scene points and cameras exists: if all 3D points lie in a single plane
and are additionally all closer to one of the two cameras, the solution
for the relative pose is not unique but twofold. As a consequence, both
solutions are computed and returned by the operator.
This means that all output parameters are of double length, and the values
of the second solution are simply concatenated behind the values of the
first one.
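Splitting such a doubled output tuple back into the two solutions is a matter of halving it (the coordinate values below are hypothetical):

```python
# Hypothetical doubled output: first solution followed by the second,
# e.g. the X coordinates of 3 reconstructed points for both solutions.
x = [0.1, 0.2, 0.3, 1.1, 1.2, 1.3]
n = len(x) // 2
x_sol1, x_sol2 = x[:n], x[n:]
print(x_sol1)   # first solution:  [0.1, 0.2, 0.3]
print(x_sol2)   # second solution: [1.1, 1.2, 1.3]
```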
Execution Information
Multithreading type: reentrant (runs in parallel with non-exclusive operators).
Multithreading scope: global (may be called from any thread).
List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in
Computer Vision”; Cambridge University Press, Cambridge; 2003.
J. Chris McGlone (editor): “Manual of Photogrammetry”;
American Society for Photogrammetry and Remote Sensing; 2004.