optimize_dl_model_for_inference (Operator)
Name
optimize_dl_model_for_inference — Optimize a model for inference on a device via the AI²-interface.
Signature
void OptimizeDlModelForInference(const HTuple& DLModelHandle, const HTuple& DLDeviceHandle, const HTuple& Precision, const HTuple& DLSamples, const HTuple& GenParam, HTuple* DLModelHandleConverted, HTuple* ConversionReport)
HDlModel HDlModel::OptimizeDlModelForInference(const HDlDeviceArray& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const
HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const
HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const char* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const
HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const wchar_t* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const   (Windows only)
static void HOperatorSet.OptimizeDlModelForInference(HTuple DLModelHandle, HTuple DLDeviceHandle, HTuple precision, HTuple DLSamples, HTuple genParam, out HTuple DLModelHandleConverted, out HTuple conversionReport)
HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice[] DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)
HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)
Description
The operator optimize_dl_model_for_inference optimizes the input model DLModelHandle for inference on the device DLDeviceHandle and returns the optimized model in DLModelHandleConverted.
This operator has two distinct functionalities: casting the model precision to Precision and calibrating the model based on the given samples DLSamples. In either case, the model architecture may additionally be optimized for DLDeviceHandle.
The parameter DLDeviceHandle specifies the deep learning device for which the model is optimized. Whether the device supports optimization can be determined using get_dl_device_param with 'conversion_supported'.
After a successful execution, optimize_dl_model_for_inference sets the parameter 'precision_is_converted' to 'true' for the output model DLModelHandleConverted. In addition, the device in DLDeviceHandle is automatically set for the model if it supports the precision set by the parameter Precision. Whether the device supports the requested precision can be determined using get_dl_device_param with 'precisions'.
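These capability queries can be sketched in HDevelop as follows; selecting devices via 'runtime' = 'gpu' and using the first returned handle are assumptions made for illustration only.

```
* Query available deep learning devices (here: GPU runtimes, as an example).
query_available_dl_devices (['runtime'], ['gpu'], DLDeviceHandles)
DLDeviceHandle := DLDeviceHandles[0]
* Does the device support optimize_dl_model_for_inference at all?
get_dl_device_param (DLDeviceHandle, 'conversion_supported', ConversionSupported)
* Which precisions does the device support?
get_dl_device_param (DLDeviceHandle, 'precisions', Precisions)
```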
The parameter Precision specifies the precision to which the model should be converted. By default, models delivered by HALCON have the Precision 'float32'.
The following values are supported for Precision:
- 'float32'
- 'float16'
- 'int8'
The parameter DLSamples specifies the samples on which the calibration is based. They should therefore be representative; it is recommended to take them from the training split. For most applications, 10-20 samples per class are sufficient to achieve good results.
Note that the samples are not needed for a pure cast operation. In this case, an empty tuple can be passed for DLSamples.
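The two functionalities can be sketched as follows; the model handle, device handle, and sample dictionaries are assumed to exist already:

```
* Pure cast to 'float16': no calibration samples are needed,
* so an empty tuple is passed for DLSamples.
create_dict (GenParam)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'float16', [], GenParam, DLModelHandleCast, ConversionReport)
* Calibration to 'int8': pass representative samples,
* e.g. 10-20 per class taken from the training split.
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'int8', DLSamples, GenParam, DLModelHandleInt8, ConversionReport)
```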
The parameter GenParam specifies additional, device-specific parameters and their values. Which parameters can be set for the given DLDeviceHandle in GenParam, and their default values, can be queried via the get_dl_device_param operator with the 'optimize_for_inference_params' parameter. Note that certain devices expect only an empty dictionary.
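A minimal sketch of this query, assuming the device supports conversion:

```
* Query the device-specific optimization parameters and their defaults.
get_dl_device_param (DLDeviceHandle, 'optimize_for_inference_params', GenParam)
* Entries of GenParam may be adapted here before the conversion.
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'int8', DLSamples, GenParam, DLModelHandleConverted, ConversionReport)
```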
The parameter ConversionReport returns a report dictionary with information about the conversion.
Attention
This operator can only be used via an AI²-interface.
Furthermore, after optimization only parameters that do not change the underlying architecture of the model can be set for DLModelHandleConverted.
For set_dl_model_param, this includes the following parameters:
- 'Any': 'device', 'runtime'
- 'anomaly_detection': 'standard_deviation_factor'
- 'classification': 'class_names'
- 'ocr_detection': 'min_character_score', 'min_link_score', 'min_word_score', 'orientation', 'sort_by_line', 'tiling', 'tiling_overlap'
- 'ocr_recognition': 'alphabet', 'alphabet_internal', 'alphabet_mapping'
- 'gc_anomaly_detection': 'anomaly_score_tolerance'
- 'detection': 'class_names', 'max_num_detections', 'max_overlap', 'max_overlap_class_agnostic', 'min_confidence'
- 'segmentation': 'class_names'
For set_deep_ocr_param, this includes the following parameters:
- 'device', 'runtime'
- 'detection_min_character_score', 'detection_min_link_score', 'detection_min_word_score'
- 'detection_orientation', 'detection_sort_by_line'
- 'detection_tiling', 'detection_tiling_overlap'
- 'recognition_alphabet', 'recognition_alphabet_internal', 'recognition_alphabet_mapping'
For set_deep_counting_model_param, this includes the following parameters:
- 'device'
- 'max_overlap', 'min_score'
Only the AI²-interface that was used for the optimization can be set via 'device' or 'runtime'.
Additional restrictions may apply to these parameters to ensure that the
underlying architecture of the model does not change.
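For example, an architecture-preserving parameter such as 'device' can still be set on the converted model, whereas architecture-changing parameters would raise an error (a sketch; 'image_dimensions' is used here only as an illustrative counterexample):

```
* Allowed: move the converted model to a device of the same AI²-interface.
set_dl_model_param (DLModelHandleConverted, 'device', DLDeviceHandle)
* Not allowed: parameters that change the architecture,
* e.g. 'image_dimensions', would raise an exception here.
```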
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Parameters
DLModelHandle (input_control) dl_model → HDlModel, HTuple (handle)
Input model.
DLDeviceHandle (input_control) dl_device(-array) → HDlDevice, HTuple (handle)
Device handle used for optimization.
Precision (input_control) string → HTuple (string)
Precision the model shall be converted to.
DLSamples (input_control) dict-array → HDict, HTuple (handle)
Samples required for optimization.
GenParam (input_control) dict → HDict, HTuple (handle)
Parameter dict for optimization.
DLModelHandleConverted (output_control) dl_model → HDlModel, HTuple (handle)
Output model with new precision.
ConversionReport (output_control) dict → HDict, HTuple (handle)
Output report for conversion.
Result
If the parameters are valid, the operator optimize_dl_model_for_inference returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Possible Predecessors
train_dl_model_batch, query_available_dl_devices
Possible Successors
set_dl_model_param, apply_dl_model
Module
Foundation. This operator uses dynamic licensing (see the "Installation Guide"). Which of the following modules is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Matching, Deep Learning Inference