create_dl_model_detection — Create a deep learning network for object detection or instance segmentation.
You can specify your model and its architecture via the parameters listed
below.
To successfully create a detection model, you need to specify its backbone
and the number of classes the model shall be able to distinguish.
The first information is handed over through the parameter
Backbone, which is explained below in the section
“Possible Backbones”.
The second information is given through the parameter NumClasses.
Note that this parameter fixes the number of classes the network will
distinguish and thus also the number of entries in
'class_ids' and 'class_names'.
The values of all other applicable parameters can be specified using
the dictionary DLModelDetectionParam.
Such a parameter is, e.g., 'instance_type', which determines which
kind of bounding boxes the model handles.
To create a deep learning network for instance segmentation, the parameter
'instance_segmentation' has to be set to 'true'.
The full list of parameters that can be set is given below in the section
“Settable Parameters”. Some parameters are only available for instance
segmentation.
In case a parameter is not specified, the default value is taken to create
the model.
Note that parameters influencing the network architecture can no longer be
changed once the network has been created.
All other parameters can still be set or changed using the operator
set_dl_model_param.
An overview of how parameters can be set is given in
get_dl_model_param, where a description of the specific
parameters is also provided.
After creating the object detection model, the parameter 'type' will
automatically be set to 'detection'.
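Put together, a minimal HDevelop sketch of the creation step might look as follows (the backbone choice, class count, and parameter values are illustrative, not prescribed by this reference):

```hdevelop
* Use a pretrained HALCON classifier as backbone.
Backbone := 'pretrained_dl_classifier_compact.hdl'
* Optional creation parameters; unspecified ones keep their defaults.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'instance_type', 'rectangle1')
* Setting 'instance_segmentation' to 'true' here would create an
* instance segmentation model instead.
* Create a detection model that distinguishes 3 classes.
create_dl_model_detection (Backbone, 3, DLModelDetectionParam, DLModelHandle)
```

After this call, architecture-related parameters are fixed; the remaining ones can still be adjusted with set_dl_model_param.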
Possible Backbones
The parameter Backbone determines the backbone your
network will use. See the chapter Deep Learning / Object Detection and Instance Segmentation
for more information on the backbone. In short, the backbone consists of a
pretrained classifier, from which only the layers necessary to generate
the feature maps are kept.
Hence, there are no fully connected layers anymore in the network.
This implies that you read in a classifier as feature extractor for
the subsequent detection network.
For this, you can read in a classifier in the HALCON format or a model
in the ONNX format; see read_dl_model for more information.
create_dl_model_detection attaches the feature pyramid on
different levels of the backbone.
More precisely, the backbone specifies, for different levels, a layer
as docking layer. When creating a detection
model, the feature pyramid is attached at the corresponding docking layers.
The pretrained classifiers provided by HALCON already have
docking layers specified. But when you use a self-provided classifier as
backbone, you have to specify them yourself.
You can set 'backbone_docking_layers' as part of the classifier
using the operator set_dl_model_param, or for the backbone as such
using this operator.
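For a self-provided backbone, the first route could be sketched as follows (the file name and layer names are hypothetical and depend on your classifier):

```hdevelop
* Read a self-provided classifier in the HALCON format.
read_dl_model ('my_classifier.hdl', DLBackboneHandle)
* Name the layers the feature pyramid should dock onto
* (hypothetical layer names).
set_dl_model_param (DLBackboneHandle, 'backbone_docking_layers', ['conv5', 'conv8', 'conv11'])
* Store the backbone so it can be passed to create_dl_model_detection.
write_dl_model (DLBackboneHandle, 'my_backbone.hdl')
```

Alternatively, 'backbone_docking_layers' can be passed directly in the dictionary DLModelDetectionParam when calling create_dl_model_detection.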
The docking layers stem from different levels, and therefore the feature
maps used in the feature pyramid are of different size.
More precisely, in the feature pyramid the feature map lengths are halved
with every level.
By implication, the input image lengths need to be divisible by 2 for every level.
This means the network architectures allow changes concerning the image
dimensions, but the dimensions 'image_width' and
'image_height'
need to be an integer multiple of
2^('max_level').
Here, 'max_level' is the highest level up to which the feature
pyramid is built.
This value depends on the attached networks as well as on the docking
layers. For the provided classifiers, the list below states up to which
level the feature pyramid is built using default settings.
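As a minimal sketch of this constraint, assuming the pyramid is built up to level 4, the image lengths must be multiples of 2^4 = 16; a desired width can be rounded up to the next valid value like this:

```hdevelop
* Assumed highest pyramid level (depends on backbone and docking layers).
MaxLevel := 4
Divisor := int(pow(2, MaxLevel))
* Round a desired width of 500 up to the next integer multiple of 16.
ImageWidth := int(ceil(500.0 / Divisor)) * Divisor
```

With these assumed values, ImageWidth becomes 512.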
HALCON provides the following pretrained classifiers you can read in
as backbone:
'pretrained_dl_classifier_alexnet.hdl':
This neural network is designed for simple classification tasks.
It is characterized by its convolution kernels in the first
convolution layers, which are larger than in other networks with
comparable classification performance
(e.g., 'pretrained_dl_classifier_compact.hdl').
This may be beneficial for feature extraction.
'pretrained_dl_classifier_enhanced.hdl':
This neural network has more hidden layers than
'pretrained_dl_classifier_compact.hdl' and is therefore
assumed to be better suited for more complex tasks. But this comes
at the cost of being more time and memory demanding.
'pretrained_dl_classifier_resnet18.hdl':
Like the network 'pretrained_dl_classifier_enhanced.hdl',
this network is suited for more complex tasks.
But its structure differs, bringing the advantage of making the
training more stable and being internally more robust. Compared to
the neural network 'pretrained_dl_classifier_resnet50.hdl',
it is less complex and has faster inference times.
'pretrained_dl_classifier_resnet50.hdl':
Like the network 'pretrained_dl_classifier_enhanced.hdl',
this network is suited for more complex tasks.
But its structure differs, bringing the advantage of making the
training more stable and being internally more robust.
To successfully set the parameter 'runtime' to 'gpu', cuDNN and cuBLAS are
required.
For further details, please refer to the “Installation Guide”,
paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.
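Assuming the above requirements are met, switching an already created model to GPU execution could look like this (the device index is illustrative):

```hdevelop
* Run the model on a GPU (requires cuDNN and cuBLAS).
set_dl_model_param (DLModelHandle, 'runtime', 'gpu')
* Optionally select a specific GPU device (index 0 here).
set_dl_model_param (DLModelHandle, 'gpu', 0)
```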
Execution Information
Multithreading type: reentrant (runs in parallel with non-exclusive operators).
Multithreading scope: global (may be called from any thread).
Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.
List of values: 'pretrained_dl_classifier_alexnet.hdl', 'pretrained_dl_classifier_compact.hdl', 'pretrained_dl_classifier_enhanced.hdl', 'pretrained_dl_classifier_mobilenet_v2.hdl', 'pretrained_dl_classifier_resnet18.hdl', 'pretrained_dl_classifier_resnet50.hdl'
If the parameters are valid, the operator create_dl_model_detection
returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.