find_surface_model
— Find the best matches of a surface model in a 3D scene.
find_surface_model( : : SurfaceModelID, ObjectModel3D, RelSamplingDistance, KeyPointFraction, MinScore, ReturnResultHandle, GenParamName, GenParamValue : Pose, Score, SurfaceMatchingResultID)
The operator find_surface_model finds the best matches of the surface model SurfaceModelID in the 3D scene ObjectModel3D and returns their poses in Pose.
The matching is divided into three steps:
Approximate matching
Sparse pose refinement
Dense pose refinement
These steps and their corresponding generic parameters are described in more detail in separate paragraphs below. The subsequent paragraphs describe the parameters and mention points to note.
The matching process and the parameters can be visualized and inspected using the HDevelop procedure debug_find_surface_model.
Matching the surface model uses points and normals of the 3D scene ObjectModel3D.
The scene must provide one of the following:
points and point normals.
points and a 2D-Mapping, e.g. an XYZ image triple converted with
xyz_to_object_model_3d
.
In this case the normals are calculated using the 2D-Mapping.
points only. In this case, the normals are estimated based on the 3D neighborhood. Note that this option is not recommended, since it generally leads to longer processing times and the computed normals are usually less accurate, leading to less accurate results.
It is important for an accurate Pose that the normals of the scene and the model point in the same direction (see 'scene_invert_normals' ).
If the model was trained for edge-supported surface-based matching, only the
second combination is possible, i.e., the scene must contain a 2D mapping.
Further, for such models it is necessary that the normal vectors point
inwards.
Note that triangles or polygons in the passed scene are ignored.
Instead, only the vertices are used for matching.
It is thus in general not recommended to use this operator on meshed scenes,
such as CAD data.
Instead, such a scene must be sampled beforehand using
sample_object_model_3d
to create points and normals
(e.g., using the method 'fast_compute_normals' ).
When using noisy point clouds, e.g. from time-of-flight cameras, the generic parameter 'scene_normal_computation' should be set to 'mls' in order to obtain more robust results (see below).
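The normal computation from a 2D mapping can be pictured with a minimal sketch (plain Python; this is an illustrative helper, not HALCON's actual implementation): the normal at a grid point is estimated from its right and lower neighbors via a cross product and then oriented towards a camera looking along the z-axis, matching the assumption stated above.

```python
import math

def estimate_normal(grid, r, c):
    """Estimate the normal at grid[r][c] from its right and lower
    neighbors (hypothetical helper; HALCON's internal computation is
    not published). grid[r][c] is an (x, y, z) tuple from an
    organized XYZ image."""
    p = grid[r][c]
    p_right = grid[r][c + 1]
    p_down = grid[r + 1][c]
    u = [p_right[i] - p[i] for i in range(3)]
    v = [p_down[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n)) or 1.0
    n = [x / length for x in n]
    # Orient the normal towards a camera looking along the z-axis.
    if n[2] > 0:
        n = [-x for x in n]
    return n
```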
SurfaceModelID is the handle of the surface model. The model must have been created previously with create_surface_model or read with read_surface_model.
Certain surface model parameters influencing the matching can be set using
set_surface_model_param
, such as
'pose_restriction_max_angle_diff' restricting the allowed range
of rotations.
ObjectModel3D is the handle of the 3D object model containing the scene in which the matches are searched.
Note that the scene is assumed to have been observed from a camera looking along the z-axis. This is important for aligning the scene normals if they are re-computed (see below).
The parameter RelSamplingDistance controls the sampling distance during the step Approximate matching and the Score calculation during the step Sparse pose refinement.
Its value is given relative to the diameter of the surface model.
Decreasing RelSamplingDistance
leads to more sampled points,
and in turn to a more stable but slower matching. Increasing
RelSamplingDistance
reduces the number of sampled scene
points, which leads to a less stable but faster matching.
For an illustration showing different values for
RelSamplingDistance
, please refer to the operator
create_surface_model
.
The sampled scene points can be retrieved for a visual inspection
using the operator get_surface_matching_result
.
For a robust matching it is recommended that at least 50-100
scene points are sampled for each object instance.
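As a rough plausibility check for this recommendation, the number of sampled points falling on one instance can be estimated from the visible surface area and the sampling step. This is a heuristic, not a HALCON formula: with a step of RelSamplingDistance times the model diameter, roughly one point falls on every step-by-step patch of surface.

```python
def expected_sampled_points(model_diameter, rel_sampling_distance,
                            visible_area):
    """Heuristic estimate (not a HALCON formula): roughly one sampled
    point per step x step patch of the visible surface, where
    step = rel_sampling_distance * model_diameter."""
    step = rel_sampling_distance * model_diameter
    return visible_area / (step * step)

# E.g. a part with 0.1 m diameter and ~0.004 m^2 of visible surface:
n = expected_sampled_points(0.1, 0.05, 0.004)
# n is about 160, comfortably above the recommended 50-100 points
```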
The parameter KeyPointFraction
controls how many
points out of the sampled scene points are selected as key points.
For example, if the value is set to 0.1, 10% of the sampled
scene points are used as key points.
For stable results it is important that each instance of the object
is covered by several key points.
Increasing KeyPointFraction
means that more key points
are selected from the scene, resulting in a slower but more stable
matching. Decreasing KeyPointFraction
has the inverse effect
and results in a faster but less stable matching.
The operator get_surface_matching_result
can be used to
retrieve the selected key points for visual inspection.
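Conceptually, the key-point selection can be sketched as uniform subsampling of the sampled scene points. HALCON's actual selection strategy is not published; this is only an illustration of what the fraction means.

```python
def select_key_points(sampled_points, key_point_fraction):
    """Sketch of uniform key-point selection: keep roughly
    key_point_fraction of the sampled scene points (illustrative
    only; not HALCON's actual strategy)."""
    if key_point_fraction >= 1.0:
        return list(sampled_points)
    stride = max(1, round(1.0 / key_point_fraction))
    return sampled_points[::stride]

pts = list(range(100))
keys = select_key_points(pts, 0.2)  # every 5th point -> 20 key points
```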
The parameter MinScore can be used to filter the results. Only matches with a score exceeding the value of MinScore are returned. If MinScore is set to zero, all matches are returned.
For edge-supported surface-based matching (see
create_surface_model
) four different sub-scores are determined
(see their explanation below). For surface-based matching models where
view-based score computation is trained (see create_surface_model
),
an additional fifth sub-score is determined.
As a consequence, you can filter the results based on each of them by
passing a tuple with up to five threshold values to MinScore
.
These threshold values are sorted in the order of the scores (see below)
and missing entries are regarded as 0, meaning no filtering based
on this sub-score.
To find suitable values for the thresholds, the corresponding sub-scores
of found object instances can be obtained using
get_surface_matching_result
.
Depending on the settings, not all sub-scores might be available.
The thresholds for unavailable sub-scores are ignored.
The five sub-scores, whose threshold values have to be passed in exactly
this order in MinScore
, are:
The overall score as returned in Score
and
through 'score' by get_surface_matching_result
,
the surface fraction of the score, i.e., how much of the object's
surface was detected in the scene, returned
through 'score_surface' by get_surface_matching_result
,
the 3D edge fraction of the score, i.e., how well the 3D edges of
the object are aligned with the 3D edges detected in the scene
returned through 'score_3d_edges' by
get_surface_matching_result
,
the 2D edge fraction of the score, i.e., how well the
object silhouette projected into the images aligns with edges
detected in the images (available only for the operators
find_surface_model_image
and
refine_surface_model_pose_image
), returned through
'score_2d_edges' by get_surface_matching_result
, and
the view-based score, i.e., how many model points were detected in the
scene, in relation to how many of the object points are potentially
visible from the determined viewpoint, returned through
'score_view_based' by get_surface_matching_result
.
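The documented filtering semantics of a MinScore tuple can be sketched as follows (a hypothetical Python helper, not HALCON code): missing thresholds count as 0, and thresholds for unavailable sub-scores are ignored.

```python
def passes_min_score(sub_scores, min_score):
    """Filter a match by up to five sub-score thresholds, in the
    documented order: overall, surface, 3D edges, 2D edges,
    view-based. Missing thresholds count as 0 (no filtering);
    unavailable sub-scores are passed as None and their thresholds
    are ignored. Sketch of the documented semantics only."""
    thresholds = list(min_score) + [0.0] * (5 - len(min_score))
    for score, threshold in zip(sub_scores, thresholds):
        if score is not None and score < threshold:
            return False
    return True

# Overall score 0.8, surface fraction 0.6, no edge/view-based scores:
ok = passes_min_score([0.8, 0.6, None, None, None], [0.5, 0.7])
# ok is False: the surface fraction 0.6 is below its threshold 0.7
```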
The parameter ReturnResultHandle determines whether a surface matching result handle is returned. If the parameter is set to 'true' , the handle is returned in the parameter SurfaceMatchingResultID.
Additional details of the matching
process can be queried with the operator
get_surface_matching_result
using that handle.
The parameters GenParamName
and GenParamValue
are used
to set generic parameters. Both get a tuple of equal length, where the
tuple passed to GenParamName
contains the names of the
parameters to set, and the tuple passed to GenParamValue
contains
the corresponding values. The possible parameter names and values
are described in the paragraph The three steps of the matching
.
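When several generic parameters are set, it can be convenient to collect them in a dictionary and split it into the two equal-length tuples the operator expects. The helper below is hypothetical, shown only to illustrate the pairing of names and values.

```python
def split_gen_params(params):
    """Hypothetical helper: split a dict of generic parameters into
    the two equal-length tuples expected by GenParamName and
    GenParamValue."""
    names = list(params.keys())
    values = [params[name] for name in names]
    return names, values

gen_param_name, gen_param_value = split_gen_params({
    'num_matches': 2,
    'max_overlap_dist_rel': 0.5,
    'scene_normal_computation': 'mls',
})
# gen_param_name pairs element-wise with gen_param_value
```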
The output parameter Pose
gives the 3D poses of the found object
instances. For every found instance of the surface model, its pose is given in the scene coordinate system; i.e., the pose describes the transformation from the model coordinate system (mcs) into the scene coordinate system (scs). Here, scs denotes the coordinate system of the scene (which is often identical to the coordinate system of the sensor, the camera coordinate system) and mcs the model coordinate system (which is a 3D world coordinate system); see Transformations / Poses and
“Solution Guide III-C - 3D Vision”
.
Thereby, the pose refers to the original coordinate system of the 3D object
model that was passed to create_surface_model
.
The output parameter Score returns a score for each match. Its value and interpretation differ between the cases distinguished below.
With pose refinement
For a matching with pose refinement, the score depends on whether edge-support was activated:
Without edge-support, the score is the surface fraction, i.e., the approximate fraction of the object's surface that is visible in the scene. It is computed by counting the number of model points that have a corresponding scene point and dividing this number either by:
the total number of points on the model, if the surface-based model is not prepared for view-based score computation
or by:
the maximum number of potentially visible model points based on the current viewpoint, if the surface-based model is prepared for view-based score computation.
0 <= Score <= 1
With edge-support, the score is the geometric mean of the surface fraction and the edge fraction. The surface fraction depends on whether the surface-based model is prepared for view-based score computation, as explained above. The edge fraction is the number of points from the sampled model edges that are aligned with edges of the scene, divided by the maximum number of potentially visible edge points on the model. Note that if the edges were extracted from multiple viewpoints, this might lead to a score greater than 1.
0 <= Score <= 1 (if the scene was acquired from one single viewpoint)
0 <= Score <= N (if the scene was merged from scenes that were acquired from N different viewpoints)
Note that for the computation of the score after the sparse pose refinement, the sampled scene points are used, whereas for the score after the dense pose refinement, all scene points are used. Therefore, after the dense pose refinement, the score value does not depend on the sampling distance of the scene.
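The score computation described above can be summarized in a short sketch, assuming the correspondence counts and the edge fraction have already been determined:

```python
import math

def match_score(n_corresponding, n_reference, edge_fraction=None):
    """Sketch of the documented score: the surface fraction is the
    number of model points with a corresponding scene point divided
    by a reference count (all model points, or the potentially
    visible model points for view-based scoring). With edge support,
    the score is the geometric mean of surface and edge fraction."""
    surface_fraction = n_corresponding / n_reference
    if edge_fraction is None:
        return surface_fraction
    return math.sqrt(surface_fraction * edge_fraction)

match_score(450, 500)        # surface only: about 0.9
match_score(450, 500, 0.4)   # with edges: sqrt(0.9 * 0.4), about 0.6
```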
Without pose refinement
If only the first step, Approximate Matching
, out of the three
steps described in The three steps of the matching
takes place,
the score value and its interpretation only depend on whether there is edge-support:
Without edge-support:
The score is the approximate number of points from the subsampled scene that lie on the found object.
Score >= 0
With edge-support:
The score is the approximate number of points from the subsampled scene that lie on the found object, multiplied by the number of points from the sampled scene edges that are aligned with edges of the model.
Score >= 0
The output parameter SurfaceMatchingResultID returns a handle for the surface matching result. Using this handle, additional details of the matching process can be queried with the operator get_surface_matching_result. Note that in order to return the handle, ReturnResultHandle has to be set to 'true' .
The matching is divided into three steps:
Approximate matching: The approximate poses of the instances of the surface model in the scene are searched.
First, points are sampled uniformly from the scene passed in
ObjectModel3D
.
The sampling distance is controlled with the parameter
RelSamplingDistance
.
Then, a set of key points is selected from the sampled scene points.
The number of selected key points is controlled with the
parameter KeyPointFraction
.
For each selected key point, the optimum pose of the
surface model is computed under the assumption that the key
point lies on the surface of the object. This is done by
pairing the key point with all other sampled scene points
and finding the point pairs on the surface model that have a
similar distance and relative orientation. The similarity is defined
by the parameters 'feat_step_size_rel' and
'feat_angle_resolution' in create_surface_model
.
The pose for which the largest number of points from the sampled scene
lie on the object is considered to be the best pose for this
key point. The number of sampled scene points on the object
is considered to be the score of the pose.
If the model was trained for edge-supported surface-based matching,
edges are extracted from the 3D scene, similar to the operator
edges_object_model_3d
, and sampled.
In addition to the sampled 3D surface, the reference points are then also paired with all sampled edge points, and similar point-edge combinations are searched on the surface model.
The score is then recomputed by multiplying the number of matching
sampled edge points with the number of matching sampled scene points,
and the best pose is extracted as described above.
From all key points the poses with the best scores are then
selected and used as approximate poses.
The maximum number of returned poses is set with the generic parameter
'num_matches' .
If the pose refinement is disabled, the score described
above is returned for each pose in Score
.
The value of the score depends on the amount of surface of the
instance that is visible in the scene and on the sampling rate of the
scene. Only poses whose score exceeds MinScore
are returned.
To determine a good threshold for MinScore
, it is recommended
to test the matching on several scenes.
Note that the resulting poses from this step are only approximate.
The error in the pose is proportional to the sampling rates of the
surface model given in create_surface_model
, and is typically
less than 5% of the object's diameter.
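The voting idea behind the approximate matching can be illustrated with a toy sketch. The model_table lookup structure and the pose ids are hypothetical; HALCON's internal data structures differ, and the real quantization is controlled by 'feat_step_size_rel' and 'feat_angle_resolution'.

```python
import math
from collections import Counter

def best_pose_for_key_point(key_point, scene_points, model_table,
                            feat_step, angle_res):
    """Toy sketch of the voting step: pair the key point with every
    other sampled scene point, quantize the pair's distance and the
    angle between its normals, and vote for the model poses stored
    under the same quantized feature. model_table is a hypothetical
    dict mapping (distance_bin, angle_bin) -> list of pose ids."""
    votes = Counter()
    kp_pos, kp_normal = key_point
    for pos, normal in scene_points:
        d = math.dist(kp_pos, pos)
        if d == 0.0:
            continue  # skip the key point itself
        cos_a = max(-1.0, min(1.0,
                    sum(a * b for a, b in zip(kp_normal, normal))))
        feature = (int(d / feat_step), int(math.acos(cos_a) / angle_res))
        for pose_id in model_table.get(feature, []):
            votes[pose_id] += 1
    # The pose with the most votes is the best pose for this key point;
    # the vote count corresponds to its score.
    return votes.most_common(1)[0] if votes else None
```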
The following generic parameters control the approximate matching
and can be set with GenParamName
and
GenParamValue
:
'num_matches': Sets the maximum number of matches that are returned.
Suggested values: 1, 2, 5
Default value: 1
Assertion: 'num_matches' > 0
'max_overlap_dist_rel': For efficiency reasons, the maximum overlap cannot be defined in 3D. Instead, only the minimum distance between the centers of the axis-aligned bounding boxes of two matches can be specified with 'max_overlap_dist_rel'. The value is set relative to the diameter of the object. Once an object with a high Score is found, all other matches are suppressed if the centers of their bounding boxes lie too close to the center of the first object. If the resulting matches must not overlap, the value for 'max_overlap_dist_rel' should be set to 1.0.
Note that only one of the parameters 'max_overlap_dist_rel' and 'max_overlap_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.
Suggested values: 0.1, 0.5, 1
Default value: 0.5
Assertion: 'max_overlap_dist_rel' >= 0
'max_overlap_dist_abs': This parameter has the same effect as 'max_overlap_dist_rel'. Note that in contrast to 'max_overlap_dist_rel', the value for 'max_overlap_dist_abs' is set as an absolute value. See 'max_overlap_dist_rel' above for a description of the effect of this parameter.
Note that only one of the parameters 'max_overlap_dist_rel' and 'max_overlap_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.
Suggested values: 1, 2, 3
Assertion: 'max_overlap_dist_abs' >= 0
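The suppression behavior controlled by 'max_overlap_dist_rel' / 'max_overlap_dist_abs' can be sketched as a simple greedy filter. The sketch uses an absolute distance; the relative variant would multiply the threshold by the object diameter.

```python
import math

def suppress_overlaps(matches, max_overlap_dist):
    """Sketch of the documented suppression: matches are processed in
    order of decreasing score; a match is dropped if its bounding-box
    center lies closer than max_overlap_dist to the center of an
    already accepted match. Each match is (score, center) here."""
    accepted = []
    for score, center in sorted(matches, reverse=True):
        if all(math.dist(center, c) >= max_overlap_dist
               for _, c in accepted):
            accepted.append((score, center))
    return accepted

matches = [(0.9, (0.0, 0.0, 0.0)),
           (0.8, (0.01, 0.0, 0.0)),  # too close to the first match
           (0.7, (0.2, 0.0, 0.0))]
kept = suppress_overlaps(matches, 0.05)
# kept contains the matches with scores 0.9 and 0.7
```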
'scene_normal_computation': This parameter controls the normal computation of the sampled scene.
In the default mode 'fast' , normals are computed based on
a small neighborhood of points. If the 3D scene already contains
normals, these are used.
In the mode 'mls' , normals are computed based on a larger neighborhood using the more complex but more accurate 'mls' method. In this mode, the normals of the sampled scene are computed anew, regardless of whether the scene already contains normals.
A more detailed description of the 'mls' method can be
found in the description of the operator
surface_normals_object_model_3d
.
The 'mls' mode is intended for noisy data, such as images
from time-of-flight cameras.
The (re-)computed normals are oriented like the original normals or, in case no original normals exist, such that they point towards the camera. This orientation implies the assumption that the scene was observed from a camera looking along the z-axis.
Value list: 'fast' , 'mls'
Default value: 'fast'
'scene_invert_normals': Inverts the orientation of the surface normals of the scene. The orientation of the scene's surface normals has to match the orientation of the model normals. If both the model and the scene are acquired with the same setup, the normals will already point in the same direction. If you experience the effect that the model is found on the 'outside' of the scene surface, try setting this parameter to 'true' . Also, make sure that the normals in the scene all point either outward or inward, i.e., that they are oriented consistently. For edge-supported surface-based matching, the normal vectors have to point inwards. The normal direction is irrelevant for the pose refinement of the surface model.
Possible values: 'false' , 'true'
Default value: 'false'
'3d_edges': Allows manually setting the 3D scene edges for edge-supported surface-based matching, i.e., if the surface model was created with 'train_3d_edges' enabled. The parameter must be a 3D object model handle.
The edges are usually a result of the operator
edges_object_model_3d
but can further be filtered in order
to remove outliers.
If this parameter is not given, find_surface_model
will
internally extract the edges similar to the operator
edges_object_model_3d
.
'3d_edge_min_amplitude_rel': Sets the threshold used when extracting 3D edges for edge-supported surface-based matching, i.e., if the surface model was created with 'train_3d_edges' enabled.
The threshold is set relative to the diameter of the object.
Note that if edges were passed manually with the generic parameter
'3d_edges' , this parameter is ignored.
Otherwise, it behaves identically to the parameter
'MinAmplitude' of operator edges_object_model_3d
.
Suggested values: 0.05, 0.1, 0.5
Default value: 0.05
Assertion: '3d_edge_min_amplitude_rel' >= 0
'3d_edge_min_amplitude_abs': Similar to '3d_edge_min_amplitude_rel' ; however, the value is given as an absolute distance and not relative to the object diameter.
Assertion: '3d_edge_min_amplitude_abs' >= 0
'viewpoint': This parameter specifies the viewpoint from which the 3D data is seen.
It is used to determine the viewing directions and edge directions
as far as the surface model was created with
'train_3d_edges' enabled. It is also used for surface models
that are prepared for view-based score computation
(i.e. with 'train_view_based' enabled) to get the maximum number
of potentially visible points of the model based on the current
viewpoint.
For this, GenParamValue
must contain a string consisting of
the three coordinates (x, y, and z) of the viewpoint, separated by
spaces. The viewpoint is defined in the same coordinate frame as
ObjectModel3D
.
Note that if edges were passed manually with the generic parameter
'3d_edges', this parameter is ignored.
Otherwise, it behaves identically to the parameter
GenParamName
of the operator edges_object_model_3d
when 'viewpoint' is set.
To improve the result of the edge-supported surface-based matching, the viewpoint should roughly correspond to the position from which the scene was acquired.
A visualization of the viewpoint can be created using the procedure
debug_find_surface_model
in order to inspect its position.
Default value: '0 0 0'
'max_gap': Gaps in the 3D data are closed, as long as they do not exceed the maximum gap size 'max_gap' [pixels] and the surface model was created with 'train_3d_edges' enabled.
Larger gaps will contain edges at their boundary, while gaps smaller
than this value will not.
This suppresses edges around smaller patches that were not
reconstructed by the sensor as well as edges at the more
distant part of a discontinuity.
For sensors with very high resolutions, the value should be increased to avoid spurious edges.
Note that if edges were passed manually with the generic parameter
'3d_edges', this parameter is ignored.
Otherwise, it behaves identically to the parameter
GenParamName
of the operator edges_object_model_3d
when 'max_gap' is set.
The influence of 'max_gap' can be inspected using the
procedure debug_find_surface_model
.
Default value: 30
'use_3d_edges': Turns the edge-supported matching on or off. This can be used to perform matching without 3D edges, even though the model was created for edge-supported matching. If the model was not created for edge-supported surface-based matching, an error is returned.
Value list: 'true' , 'false'
Default value: 'true'
Sparse pose refinement: In this second step, the approximate poses found in the previous step are further refined. This increases the accuracy of the poses and the significance of the score value.
The sparse pose refinement uses the sampled scene points from the approximate matching. The pose is optimized such that the distances from the sampled scene points to the plane of the closest model point are minimal. The plane of each model point is defined as the plane perpendicular to its normal.
Additionally, if the model was trained for edge-supported surface-based matching and it was not disabled using the parameter 'use_3d_edges' (see above), the pose is also optimized such that the sampled edge points in the scene align with the edges of the surface model.
The sparse pose refinement is enabled by default. It can be disabled by setting the generic parameter 'sparse_pose_refinement' to 'false' . Since each key point produces one pose candidate, the total number of pose candidates to be optimized is proportional to the number of key points. For large scenes with much clutter, i.e., scene parts that do not belong to the object of interest, it can be faster to disable the sparse pose refinement.
The score of each pose is recomputed after the sparse pose refinement.
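The quantity minimized by the pose refinement is the point-to-plane distance described above; a minimal sketch, assuming unit-length normals:

```python
def point_to_plane_distance(scene_point, model_point, model_normal):
    """Signed distance from a scene point to the plane through the
    closest model point, perpendicular to that point's normal - the
    quantity the pose refinement minimizes (sketch; assumes a unit
    normal)."""
    return sum((s - m) * n for s, m, n
               in zip(scene_point, model_point, model_normal))

d = point_to_plane_distance((0.0, 0.0, 1.2),
                            (0.0, 0.0, 1.0),
                            (0.0, 0.0, 1.0))
# d is approximately 0.2: the scene point lies 0.2 above the model plane
```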
The following generic parameters control the sparse pose refinement
and can be set with GenParamName
and
GenParamValue
:
'sparse_pose_refinement': Enables or disables the sparse pose refinement.
Value list: 'true' , 'false'
Default value: 'true'
'pose_ref_use_scene_normals': Enables or disables the usage of scene normals for the pose refinement. If this parameter is enabled and the scene contains point normals, those normals are used to increase the accuracy of the pose refinement. For this, the influence of scene points whose normal points in a different direction than the model normal is decreased. Note that the scene must contain point normals; otherwise, this parameter is ignored.
Value list: 'true' , 'false'
Default value: 'false'
'use_view_based': Turns the view-based score computation for surface-based matching on or off. This can be used to perform matching without using the view-based score, even though the model was prepared for view-based score computation. The influence of 'use_view_based' on the score is explained in the documentation of Score above. If the model was not prepared for view-based score computation, an error is returned.
Value list: 'true' , 'false'
Default value: 'false' , if 'train_view_based' was disabled when creating the model, otherwise 'true' .
Dense pose refinement: Accurately refines the poses found in the previous steps. This step works similarly to the sparse pose refinement and minimizes the distances between the scene points and the planes of the closest model points. The differences are that
only the 'num_matches' poses with the best scores from the previous step are refined;
all points from the scene passed in ObjectModel3D
are
used for the refinement.
if the model was created for edge-supported surface-based matching and it was not disabled using the parameter 'use_3d_edges' (see above), all extracted scene edge points are used for the refinement, instead of only the sampled edge points.
Taking all points from the scene increases the accuracy of the refinement but is slower than refining on the subsampled scene points. The dense pose refinement is enabled by default, but can be disabled with the generic parameter 'dense_pose_refinement' .
After the dense pose refinement, the score of each match is recomputed. The threshold for considering a point to be 'on' the object is set with the generic parameter 'pose_ref_scoring_dist_rel' or 'pose_ref_scoring_dist_abs' (see below). When using the edge-supported matching, the parameters 'pose_ref_scoring_dist_edges_rel' or 'pose_ref_scoring_dist_edges_abs' control the corresponding thresholds for edges.
The final accuracy of the refined pose depends on several factors. The internal refinement algorithm has an accuracy of up to 1e-7 times the size (diameter) of the model. This maximal accuracy is only achieved for best possible conditions. These further factors for the final accuracy are the shape of the model, the number of scene points, the noise of the scene points, the visible part of the object instance, and the position of the object.
The following generic parameters influence the accuracy and speed of
the dense pose refinement and can be set with GenParamName
and GenParamValue
:
'dense_pose_refinement': Enables or disables the dense pose refinement.
Value list: 'true' , 'false'
Default value: 'true'
'pose_ref_num_steps': Number of iterations for the dense pose refinement. Increasing the number of iterations leads to a more accurate pose at the expense of runtime. However, once convergence is reached, the accuracy can no longer be increased, even if the number of steps is increased. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 1, 3, 5, 20
Default value: 5
Assertion: 'pose_ref_num_steps' > 0
'pose_ref_sub_sampling': Sets the rate of scene points used for the dense pose refinement. For example, if this value is set to 5, every 5th point from the scene is used for the pose refinement. This parameter allows an easy trade-off between speed and accuracy of the pose refinement: increasing the value leads to fewer points being used and in turn to a faster but less accurate pose refinement. Decreasing the value has the inverse effect. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 1, 2, 5, 10
Default value: 2
Assertion: 'pose_ref_sub_sampling' > 0
'pose_ref_dist_threshold_rel': Sets the distance threshold for the dense pose refinement relative to the diameter of the surface model. Only scene points that are closer to the object than this distance are used for the optimization; scene points further away are ignored.
Note that only one of the parameters 'pose_ref_dist_threshold_rel' and 'pose_ref_dist_threshold_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default value: 0.1
Assertion: 0 < 'pose_ref_dist_threshold_rel'
'pose_ref_dist_threshold_abs': Sets the distance threshold for the dense pose refinement as an absolute value. See 'pose_ref_dist_threshold_rel' for a detailed description.
Note that only one of the parameters 'pose_ref_dist_threshold_rel' and 'pose_ref_dist_threshold_abs' should be set. If both are set, only the value of the last modified parameter is used.
Assertion: 0 < 'pose_ref_dist_threshold_abs'
'pose_ref_scoring_dist_rel': Sets the distance threshold for scoring relative to the diameter of the surface model. See 'pose_ref_scoring_dist_abs' below for a detailed description.
Note that only one of the parameters 'pose_ref_scoring_dist_rel' and 'pose_ref_scoring_dist_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default value: 0.005
Assertion: 0 < 'pose_ref_scoring_dist_rel'
'pose_ref_scoring_dist_abs': Sets the distance threshold for scoring as an absolute value. Only scene points that are closer to the object than this distance are considered to be 'on the model' when computing the score after the pose refinement; all other scene points are considered not to be on the model. The value should correspond to the amount of noise on the coordinates of the scene points. Note that this parameter is ignored if the dense pose refinement is disabled.
Note that only one of the parameters 'pose_ref_scoring_dist_rel' and 'pose_ref_scoring_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.
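The effect of the scoring threshold can be sketched as follows. The sketch is simplified: it just reports the fraction of scene points whose distance to the model falls below the threshold, illustrating how the threshold separates points 'on the model' from the rest.

```python
def recompute_score(distances, scoring_dist):
    """Simplified sketch of the score recomputation: a scene point
    counts as 'on the model' if its distance to the model is at most
    the scoring threshold. Returns the fraction of such points."""
    on_model = sum(1 for d in distances if d <= scoring_dist)
    return on_model / len(distances)

recompute_score([0.001, 0.002, 0.05, 0.004], 0.005)  # 3 of 4 points
```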
'pose_ref_use_scene_normals': Enables or disables the usage of scene normals for the pose refinement. This parameter is explained in more detail in the section Sparse pose refinement above.
Value list: 'true' , 'false'
Default value: 'false'
'pose_ref_dist_threshold_edges_rel': Sets the distance threshold of edges for the dense pose refinement relative to the diameter of the surface model. Only scene edges that are closer to the object edges than this distance are used for the optimization; scene edges further away are ignored.
Note that only one of the parameters 'pose_ref_dist_threshold_edges_rel' and 'pose_ref_dist_threshold_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default value: 0.1
Assertion: 0 < 'pose_ref_dist_threshold_edges_rel'
'pose_ref_dist_threshold_edges_abs': Sets the distance threshold of edges for the dense pose refinement as an absolute value. See 'pose_ref_dist_threshold_edges_rel' for a detailed description.
Note that only one of the parameters 'pose_ref_dist_threshold_edges_rel' and 'pose_ref_dist_threshold_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.
Assertion: 0 < 'pose_ref_dist_threshold_edges_abs'
'pose_ref_scoring_dist_edges_rel': Sets the distance threshold of edges for scoring relative to the diameter of the surface model. See 'pose_ref_scoring_dist_edges_abs' below for a detailed description.
Note that only one of the parameters 'pose_ref_scoring_dist_edges_rel' and 'pose_ref_scoring_dist_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default value: 0.005
Assertion: 0 < 'pose_ref_scoring_dist_edges_rel'
'pose_ref_scoring_dist_edges_abs': Sets the distance threshold of edges for scoring as an absolute value. Only scene edges that are closer to the object edges than this distance are considered to be 'on the model' when computing the score after the pose refinement; all other scene edges are considered not to be on the model. The value should correspond to the expected inaccuracy of the extracted scene edges and the inaccuracy of the refined pose.
Note that only one of the parameters 'pose_ref_scoring_dist_edges_rel' and 'pose_ref_scoring_dist_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.
Assertion: 0 < 'pose_ref_scoring_dist_edges_abs'
'use_view_based': Turns the view-based score computation for surface-based matching on or off. For further details, see the respective description in the section about the sparse pose refinement above.
If the model was not prepared for view-based score computation, an error is returned.
Value list: 'true' , 'false'
Default value: 'false' , if 'train_view_based' was disabled when creating the model, otherwise 'true' .
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.
This operator supports cancelling timeouts and interrupts.
SurfaceModelID
(input_control) surface_model →
(handle)
Handle of the surface model.
ObjectModel3D
(input_control) object_model_3d →
(handle)
Handle of the 3D object model containing the scene.
RelSamplingDistance
(input_control) real →
(real)
Scene sampling distance relative to the diameter of the surface model.
Default value: 0.05
Suggested values: 0.1, 0.07, 0.05, 0.04, 0.03
Restriction: 0 < RelSamplingDistance < 1
KeyPointFraction
(input_control) real →
(real)
Fraction of sampled scene points used as key points.
Default value: 0.2
Suggested values: 0.3, 0.2, 0.1, 0.05
Restriction: 0 < KeyPointFraction <= 1
MinScore
(input_control) real(-array) →
(real / integer)
Minimum score of the returned poses.
Default value: 0
Restriction: MinScore >= 0
ReturnResultHandle
(input_control) string →
(string)
Enable returning a result handle in
SurfaceMatchingResultID
.
Default value: 'false'
Suggested values: 'true' , 'false'
GenParamName
(input_control) attribute.name-array →
(string)
Names of the generic parameters.
Default value: []
List of values: '3d_edge_min_amplitude_abs' , '3d_edge_min_amplitude_rel' , '3d_edges' , 'dense_pose_refinement' , 'max_gap' , 'max_overlap_dist_abs' , 'max_overlap_dist_rel' , 'num_matches' , 'pose_ref_dist_threshold_abs' , 'pose_ref_dist_threshold_edges_abs' , 'pose_ref_dist_threshold_edges_rel' , 'pose_ref_dist_threshold_rel' , 'pose_ref_num_steps' , 'pose_ref_scoring_dist_abs' , 'pose_ref_scoring_dist_edges_abs' , 'pose_ref_scoring_dist_edges_rel' , 'pose_ref_scoring_dist_rel' , 'pose_ref_sub_sampling' , 'pose_ref_use_scene_normals' , 'scene_invert_normals' , 'scene_normal_computation' , 'sparse_pose_refinement' , 'use_3d_edges' , 'use_view_based' , 'viewpoint'
GenParamValue
(input_control) attribute.value-array →
(string / real / integer)
Values of the generic parameters.
Default value: []
Suggested values: 0, 1, 'true' , 'false' , 0.005, 0.01, 0.03, 0.05, 0.1, 'num_scene_points' , 'model_point_fraction' , 'num_model_points' , 'fast' , 'mls'
Pose
(output_control) pose(-array) →
(real / integer)
3D pose of the surface model in the scene.
Score
(output_control) real-array →
(real)
Score of the found instances of the surface model.
SurfaceMatchingResultID
(output_control) surface_matching_result(-array) →
(handle)
Handle of the matching result, if enabled in
ReturnResultHandle
.
find_surface_model
returns 2 (H_MSG_TRUE) if all parameters are
correct. If necessary, an exception is raised.
read_object_model_3d
,
xyz_to_object_model_3d
,
get_object_model_3d_params
,
read_surface_model
,
create_surface_model
,
get_surface_model_param
,
edges_object_model_3d
refine_surface_model_pose
,
get_surface_matching_result
,
clear_surface_matching_result
,
clear_object_model_3d
refine_surface_model_pose
,
find_surface_model_image
,
refine_surface_model_pose_image
refine_surface_model_pose
,
find_surface_model_image
3D Metrology