optimize_dl_model_for_inference (Operator)

Name

optimize_dl_model_for_inference — Optimize a model for inference on a device via the AI²-interface.

Signature

optimize_dl_model_for_inference( : : DLModelHandle, DLDeviceHandle, Precision, DLSamples, GenParam : DLModelHandleConverted, ConversionReport)

Herror T_optimize_dl_model_for_inference(const Htuple DLModelHandle, const Htuple DLDeviceHandle, const Htuple Precision, const Htuple DLSamples, const Htuple GenParam, Htuple* DLModelHandleConverted, Htuple* ConversionReport)

void OptimizeDlModelForInference(const HTuple& DLModelHandle, const HTuple& DLDeviceHandle, const HTuple& Precision, const HTuple& DLSamples, const HTuple& GenParam, HTuple* DLModelHandleConverted, HTuple* ConversionReport)

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDeviceArray& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const char* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const wchar_t* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const   (Windows only)

static void HOperatorSet.OptimizeDlModelForInference(HTuple DLModelHandle, HTuple DLDeviceHandle, HTuple precision, HTuple DLSamples, HTuple genParam, out HTuple DLModelHandleConverted, out HTuple conversionReport)

HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice[] DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)

HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)

def optimize_dl_model_for_inference(dlmodel_handle: HHandle, dldevice_handle: MaybeSequence[HHandle], precision: str, dlsamples: Sequence[HHandle], gen_param: HHandle) -> Tuple[HHandle, HHandle]

Description

The operator optimize_dl_model_for_inference optimizes the input model DLModelHandle for inference on the device DLDeviceHandle and returns the optimized model in DLModelHandleConverted. This operator has two distinct functionalities: casting the model precision to Precision and calibrating the model based on the given samples DLSamples. Additionally, in either case the model architecture may be optimized for DLDeviceHandle.
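For illustration, a minimal HDevelop sketch of a calibration run might look as follows (hedged: the model file name, the device selection via 'runtime' = 'gpu', and the precision value 'int8' are assumptions, not taken from this page):

```
* Read a trained model and pick a device that supports conversion.
read_dl_model ('my_model.hdl', DLModelHandle)
query_available_dl_devices (['runtime'], ['gpu'], DLDeviceHandles)
DLDeviceHandle := DLDeviceHandles[0]
* DLSamples: a tuple of representative sample dictionaries,
* e.g. 10-20 samples per class taken from the training split.
create_dict (GenParam)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'int8', DLSamples, GenParam, DLModelHandleConverted, ConversionReport)
```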

The parameter DLDeviceHandle specifies the deep learning device for which the model is optimized. Whether the device supports optimization can be determined using get_dl_device_param with 'conversion_supported'. After a successful execution, optimize_dl_model_for_inference sets the parameter 'precision_is_converted' to 'true' for the output model DLModelHandleConverted. In addition, the device in DLDeviceHandle is automatically set for the model if it supports the precision set by the parameter Precision. Whether the device supports the requested precision can be determined using get_dl_device_param with 'precisions'.
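Both capabilities can be checked up front, e.g. in HDevelop (a sketch; the variable names are illustrative):

```
* Check whether the device supports model conversion at all.
get_dl_device_param (DLDeviceHandle, 'conversion_supported', ConversionSupported)
* Query the list of precisions the device supports.
get_dl_device_param (DLDeviceHandle, 'precisions', SupportedPrecisions)
```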

The parameter Precision specifies the precision to which the model should be converted. By default, models that are delivered by HALCON have the Precision 'float32'. The values supported for Precision depend on the device and can be queried using get_dl_device_param with 'precisions'.

The parameter DLSamples specifies the samples on which the calibration is based. Consequently, they should be representative of the application; it is recommended to take them from the training split. For most applications, 10-20 samples per class are sufficient to achieve good results.

Note that the samples are not needed for a pure cast operation. In this case, an empty tuple can be passed for DLSamples.
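A pure cast can thus be sketched as follows (hedged: 'float16' as the target precision is an assumption; the samples tuple is simply left empty):

```
* Pure cast: no calibration samples needed, pass an empty tuple.
create_dict (GenParam)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'float16', [], GenParam, DLModelHandleConverted, ConversionReport)
```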

The parameter GenParam specifies additional, device-specific parameters and their values. Which parameters can be set for the given DLDeviceHandle in GenParam, and their default values, can be queried via the operator get_dl_device_param with the parameter 'optimize_for_inference_params'.

Note that certain devices expect only an empty dictionary.
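Querying the device-specific defaults might look like this (a sketch; whether the returned dictionary can be passed on unchanged is an assumption):

```
* Query the device-specific optimization parameters and their defaults.
get_dl_device_param (DLDeviceHandle, 'optimize_for_inference_params', GenParam)
* Adjust entries of GenParam if needed, then pass it to
* optimize_dl_model_for_inference.
```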

The parameter ConversionReport returns a report dictionary with information about the conversion.

Attention

This operator can only be used via an AI²-interface. Furthermore, after optimization only parameters that do not change the underlying architecture of the model can be set for DLModelHandleConverted.

For set_dl_model_param, this includes the following parameters:

For set_deep_ocr_param, this includes the following parameters:

For set_deep_counting_model_param, this includes the following parameters:

Only the AI²-interface that was used for the optimization can be set using 'device' or 'runtime'. Additional restrictions may apply to these parameters to ensure that the underlying architecture of the model does not change.
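Setting the device on the converted model and running inference can be sketched as follows (hedged: DLSampleBatch is an illustrative variable holding preprocessed samples):

```
* The converted model may only be assigned a device of the
* AI²-interface that was used for the optimization.
set_dl_model_param (DLModelHandleConverted, 'device', DLDeviceHandle)
apply_dl_model (DLModelHandleConverted, DLSampleBatch, [], DLResultBatch)
```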

Execution Information

Parameters

DLModelHandle (input_control)  dl_model → HDlModel, HTuple (handle)

Input model.

DLDeviceHandle (input_control)  dl_device(-array) → HDlDevice, HTuple (handle)

Device handle used for optimization.

Precision (input_control)  string → HTuple (string)

Precision the model shall be converted to.

DLSamples (input_control)  dict-array → HDict, HTuple (handle)

Samples required for optimization.

GenParam (input_control)  dict → HDict, HTuple (handle)

Parameter dict for optimization.

DLModelHandleConverted (output_control)  dl_model → HDlModel, HTuple (handle)

Output model with new precision.

ConversionReport (output_control)  dict → HDict, HTuple (handle)

Output report for conversion.

Result

If the parameters are valid, the operator optimize_dl_model_for_inference returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

train_dl_model_batch, query_available_dl_devices

Possible Successors

set_dl_model_param, apply_dl_model

Module

Foundation. This operator uses dynamic licensing (see the "Installation Guide"). Which of the following modules is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Matching, Deep Learning Inference