get_dl_model_param (Operator)

Name

get_dl_model_param — Return the parameters of a deep learning model.

Signature

get_dl_model_param( : : DLModelHandle, GenParamName : GenParamValue)

Herror T_get_dl_model_param(const Htuple DLModelHandle, const Htuple GenParamName, Htuple* GenParamValue)

void GetDlModelParam(const HTuple& DLModelHandle, const HTuple& GenParamName, HTuple* GenParamValue)

HTuple HDlModel::GetDlModelParam(const HString& GenParamName) const

HTuple HDlModel::GetDlModelParam(const char* GenParamName) const

HTuple HDlModel::GetDlModelParam(const wchar_t* GenParamName) const   (Windows only)

static void HOperatorSet.GetDlModelParam(HTuple DLModelHandle, HTuple genParamName, out HTuple genParamValue)

HTuple HDlModel.GetDlModelParam(string genParamName)

Description

get_dl_model_param returns the parameter values GenParamValue of GenParamName for the deep learning model DLModelHandle.

For a deep learning model, the parameters GenParamName can be set using set_dl_model_param or create_dl_model_detection, depending on the parameter and the model type. With this operator, get_dl_model_param, you can retrieve the parameter values GenParamValue. Below we give an overview of the different parameters and an explanation, except for those that can only be set. For the latter, please see the documentation of the corresponding operator.

GenParamName                      Object Detection        Semantic Segmentation
                                  create   set   get       set   get
'batch_size'                         n      y     y         y     y
'class_ids'                          y      y     y         y     y
'gpu'                                n      y     y         y     y
'image_dimensions'                   y      n     y         y     y
'image_height'                       y      n     y         y     y
'image_width'                        y      n     y         y     y
'image_num_channels'                 y      n     y         y     y
'image_range_max'                    n      n     y         y     y
'image_range_min'                    n      n     y         y     y
'learning_rate'                      n      y     y         y     y
'momentum'                           n      y     y         y     y
'num_classes' (NumClasses)           y      n     y         n     y
'runtime'                            n      y     y         y     y
'runtime_init'                       n      y     n         y     n
'type'                               n      n     y         n     y
'weight_prior'                       n      y     y         y     y
'aspect_ratios'                      y      n     y         -     -
'backbone' (Backbone)                y      n     y         -     -
'capacity'                           y      n     y         -     -
'class_weights'                      y      n     y         -     -
'max_level'                          y      n     y         -     -
'min_level'                          y      n     y         -     -
'max_num_detections'                 y      y     y         -     -
'max_overlap'                        y      y     y         -     -
'max_overlap_class_agnostic'         y      y     y         -     -
'min_confidence'                     y      y     y         -     -
'num_subscales'                      y      n     y         -     -
'ignore_class_ids'                   -      -     -         y     y

Here, 'set' denotes set_dl_model_param, 'get' denotes get_dl_model_param, and 'create' denotes create_dl_model_detection. We write 'y' if the operator can be used for this parameter and model, 'n' if not, and '-' if the parameter is not applicable for this type of model. Certain parameters are set as non-optional operator parameters; the corresponding notation is given in parentheses.

In the following we list and explain the parameters GenParamName whose values can be retrieved using this operator, get_dl_model_param. They are sorted according to the model type. Note that for models of 'type'='segmentation' the default values depend on the specific network and therefore have to be retrieved.
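
For illustration, a minimal HDevelop sketch of how parameter values can be queried; the model file name is a placeholder for any readable deep learning model:

* Read a deep learning model (placeholder file name).
read_dl_model ('my_dl_model.hdl', DLModelHandle)
* Query the model type and some of its general parameters.
get_dl_model_param (DLModelHandle, 'type', ModelType)
get_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
get_dl_model_param (DLModelHandle, 'class_ids', ClassIDs)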

Any Model
'batch_size':

Number of input images in a batch and thus the number of images that are processed simultaneously in a single iteration of training or inference. Please refer to train_dl_model_batch for further details.

'class_ids':

Unique IDs of the classes the model shall distinguish. Any integer within the allowed interval can be used as a class ID value. The tuple is of length 'num_classes'.

We point out the slightly different meanings and restrictions depending on the model type:

Models of 'type'='detection':

Only the classes of the objects to be detected are included, i.e., there is no background class.

Default: 'class_ids' = '[0, ..., num_classes-1]'

Models of 'type'='segmentation':

Every class used for training has to be included, thus also the class ID of the 'background' class. Therefore, for such a model the tuple has a minimum length of 2.
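
As a sketch for a segmentation model (the file name and class IDs are example values; class 0 serves as the 'background' class here):

* Read a segmentation model and set its class IDs, including the background class.
read_dl_model ('my_segmentation_model.hdl', DLModelHandle)
set_dl_model_param (DLModelHandle, 'class_ids', [0, 10, 20])
get_dl_model_param (DLModelHandle, 'class_ids', ClassIDs)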

'gpu':

Identifier of the GPU on which the training and inference operators (train_dl_model_batch and apply_dl_model) are executed. Per default, the first available GPU is used. get_system with 'cuda_devices' can be used to retrieve a list of available GPUs. Pass the index in this list to 'gpu'.

Default: 'gpu' = '0'
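
A minimal sketch for selecting a specific GPU, assuming at least two CUDA-capable devices and an existing model handle DLModelHandle:

* List the available CUDA devices and select the second one (index 1).
get_system ('cuda_devices', CudaDevices)
set_dl_model_param (DLModelHandle, 'gpu', 1)
get_dl_model_param (DLModelHandle, 'gpu', GpuIndex)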

'image_dimensions':

Tuple containing the input image dimensions 'image_width', 'image_height', and the number of channels 'image_num_channels'.

The respective default values and possible value ranges depend on the model and model type. Please see the individual dimension parameter description for more details.
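
The tuple returned for 'image_dimensions' contains the same values as the three individual parameters, e.g. (assuming an existing model handle DLModelHandle):

* [Width, Height, NumChannels] in one tuple ...
get_dl_model_param (DLModelHandle, 'image_dimensions', ImageDimensions)
* ... or queried individually.
get_dl_model_param (DLModelHandle, 'image_width', ImageWidth)
get_dl_model_param (DLModelHandle, 'image_height', ImageHeight)
get_dl_model_param (DLModelHandle, 'image_num_channels', ImageNumChannels)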

'image_height', 'image_width':

Height and width of the input images, respectively, that the network will process.

These parameters can attain different values depending on the model type:

Models of 'type'='detection':

The network architectures allow changes of the image dimensions. But since the image side lengths are halved at every level, the dimensions 'image_width' and 'image_height' need to be an integer multiple of a value that depends on the 'backbone' and on the parameter 'max_level'; see create_dl_model_detection for further information.

Default: 'image_height' = '640', 'image_width' = '640'

Models of 'type'='segmentation':

The network architectures allow changes of the image dimensions.

The default and minimal values are given by the network, see read_dl_model.

'image_num_channels':

Number of channels of the input images the network will process. The default value is given by the network, see read_dl_model and create_dl_model_detection.

Any number of input image channels is possible.

If the number of channels is changed to a value > 1, the weights of the first layers after the input image layer will be initialized with random values. Note that in this case more data is needed for retraining. If the number of channels is changed to 1, the weights of the affected layers are fused.

Models of 'type'='detection':

Default: 'image_num_channels' = '3'
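
As a sketch for a model of 'type'='segmentation' (for a detection model, the number of channels can only be set when the model is created), assuming an existing model handle DLModelHandle:

* Switch to single-channel (gray value) input images; the affected weights are fused.
set_dl_model_param (DLModelHandle, 'image_num_channels', 1)
get_dl_model_param (DLModelHandle, 'image_num_channels', NumChannels)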

'image_range_max', 'image_range_min':

Maximum and minimum gray value of the input images, respectively, the network will process.

The default values are given by the network, see read_dl_model and create_dl_model_detection.

'learning_rate':

Value of the factor determining the gradient influence during training. Please refer to train_dl_model_batch for further details. The default values depend on the model.

'momentum':

When updating the weights of the network, the hyperparameter 'momentum' specifies to which extent previous updating vectors will be added to the current updating vector. Please refer to train_dl_model_batch for further details. The default value is given by the model.

'num_classes':

Number of distinct classes that the model is able to distinguish for its predictions.

This parameter differs between the model types. For a model of 'type'='detection' the 'background' class is not included, as background is not predicted by a detector. Also, for such a model this parameter is set as NumClasses in create_dl_model_detection, and 'class_ids' always needs to have a number of entries equal to 'num_classes'. A model of 'type'='segmentation', however, does predict background, and therefore in this case the 'background' class is included in 'num_classes'. For these models, 'num_classes' is determined implicitly by the length of 'class_ids'.
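
As an illustration for a detection model (the backbone file name and the number of classes are example values):

* Create a detection model for 3 object classes (no background class).
create_dict (DLModelDetectionParam)
create_dl_model_detection ('pretrained_dl_classifier_compact.hdl', 3, DLModelDetectionParam, DLModelHandle)
get_dl_model_param (DLModelHandle, 'num_classes', NumClasses)
get_dl_model_param (DLModelHandle, 'class_ids', ClassIDs)
* For a detection model, |ClassIDs| always equals NumClasses (here: 3).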

'runtime':

Defines the device on which the operators will be executed.

Default: 'runtime' = 'gpu'

'cpu':

The operator apply_dl_model will be executed on the CPU, whereas the operator train_dl_model_batch is not executable.

In case the GPU has been used before, CPU memory is initialized, and if necessary values stored on the GPU memory are moved to the CPU memory.

The 'cpu' runtime uses OpenMP for the parallelization of apply_dl_model. Per default, all threads available to the OpenMP runtime are used. Use the thread-specific set_system parameter 'tsp_thread_num' to specify the number of threads to use; a short sketch follows the 'gpu' entry below.

'gpu':

The GPU memory is initialized. The operators apply_dl_model and train_dl_model_batch will be executed on the GPU. For the specific requirements please refer to the HALCON “Installation Guide”.
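
The following minimal sketch, assuming a model handle DLModelHandle has already been obtained (e.g., with read_dl_model), switches the model to CPU execution and limits the number of OpenMP threads; the thread count is an example value:

* Switch the model to CPU execution (training is then not possible).
set_dl_model_param (DLModelHandle, 'runtime', 'cpu')
* Restrict apply_dl_model to 4 OpenMP threads (example value).
set_system ('tsp_thread_num', 4)
get_dl_model_param (DLModelHandle, 'runtime', Runtime)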

'type':

This parameter returns the model type. The following types are distinguished: 'detection' and 'segmentation'.

'weight_prior':

Regularization parameter used for the regularization of the loss function. For a detailed description of the regularization term we refer to train_dl_model_batch. Simply put: regularization favors simpler models that are less likely to learn noise in the data and generalize better. Per default, no regularization is used, i.e., 'weight_prior' is set to 0.0. In case the classifier overfits the data, it is strongly recommended to try different values for the parameter 'weight_prior' to improve the generalization properties of the neural network. Choosing its value is a trade-off between the model's ability to generalize, overfitting, and underfitting. If 'weight_prior' is too small, the model might overfit; if it is too large, the model might lose its ability to fit the data, because all weights are effectively zero. For finding an ideal value, we recommend a cross-validation, i.e., to perform the training for a range of values and choose the value that results in the best validation error. For typical applications, we recommend testing the values for 'weight_prior' on a logarithmic scale. If the training takes a very long time, one might consider performing the hyperparameter optimization on a reduced amount of data.

Default: 'weight_prior' = '0.0'
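
A hedged sketch of the recommended search over 'weight_prior' on a logarithmic scale; the concrete value range, the training call, and the evaluation are application specific and only indicated by comments:

* Try regularization strengths on a logarithmic scale (example values).
WeightPriors := [0.0001, 0.001, 0.01]
for Index := 0 to |WeightPriors| - 1 by 1
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPriors[Index])
    * ... train with train_dl_model_batch and evaluate on a validation set ...
endfor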

Models of 'type'='detection'
'aspect_ratios':

The parameter 'aspect_ratios' determines the aspect ratio, height to width, of the reference bounding boxes. E.g., the ratio 2 gives a narrow and 0.5 a broad initial reference box. The size of the reference bounding boxes is affected by the parameter 'num_subscales'; in its explanation we give the formula for the sizes and lengths of the generated reference bounding boxes. See the chapter Deep Learning / Object Detection for more explanations on bounding boxes.

Default: 'aspect_ratios' = '[1.0, 2.0, 0.5]'

You can set a tuple of values. A higher number of aspect ratios increases the number of proposed reference bounding boxes, which might lead to a better localization but also increases the runtime and memory consumption.
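
Since 'aspect_ratios' can only be set when the model is created, a sketch using the generic parameter dict of create_dl_model_detection (backbone name, number of classes, and ratios are example values):

* Pass 'aspect_ratios' via the generic parameter dict at creation time.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'aspect_ratios', [1.0, 2.0, 0.5])
create_dl_model_detection ('pretrained_dl_classifier_compact.hdl', 3, DLModelDetectionParam, DLModelHandle)
get_dl_model_param (DLModelHandle, 'aspect_ratios', AspectRatios)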

'backbone':

The parameter 'backbone' is the name (together with the path) of the backbone network which is used to create the model. A list of the delivered backbone networks can be found under create_dl_model_detection.

'capacity':

This parameter roughly determines the number of parameters (or filter weights) in the deeper sections of the object detection network (after the backbone). Its possible values are 'high', 'medium', and 'low'.

It can be used to trade off between detection performance and speed. For simpler object detection tasks, the 'low' or 'medium' settings may be sufficient to achieve the same detection performance as with 'high'.

Default: 'capacity' = 'high'

'class_weights':

The parameter 'class_weights' is a tuple of class-specific weighting factors for the loss. These factors are sorted according to the classes in the tuple 'class_ids'. One exception is the case where all classes have the same weight; in this case the value is returned as a single number.

Default: 'class_weights' = '0.25' (for each class).

'max_level', 'min_level':

These parameters determine on which levels the additional networks are attached to the feature pyramid. We refer to the chapter Deep Learning / Object Detection for further explanations of the feature pyramid and the attached networks.

From these ('max_level' - 'min_level' + 1) networks, all predictions with a minimum confidence value are kept as long as they do not overlap too strongly (see 'min_confidence' and 'max_overlap').

The level declares how often the size of the feature map has already been scaled down. Thus, level 0 corresponds to feature maps with the size of the input image, level 1 to feature maps subscaled once, and so on. As a consequence, smaller objects are detected in the lower levels, whereas larger objects are detected in the higher levels.

The value for 'min_level' needs to be at least 2.

If 'max_level' is larger than the number of levels the backbone can provide, the backbone is extended with additional (randomly initialized) convolutional layers in order to generate deeper levels. Further, 'max_level' may have an influence on the minimal input image size.

Note that for small input image dimensions, high levels might not be meaningful, as the feature maps could already be too small to contain meaningful information.

A higher number of used levels might increase the runtime and memory consumption, whereby especially the lower levels carry weight.

Default: 'max_level' = '6', 'min_level' = '2'

'max_num_detections':

This parameter determines the maximum number of detections (bounding boxes) per image proposed by the network.

Default: 'max_num_detections' = '100'

'max_overlap':

The maximum allowed intersection over union (IoU) for two predicted bounding boxes of the same class. Or, vice versa, when two bounding boxes are classified into the same class and have an IoU higher than 'max_overlap', the one with the lower confidence value gets suppressed. We refer to the chapter Deep Learning / Object Detection for further explanations of the IoU.

Default: 'max_overlap' = '0.5'

'max_overlap_class_agnostic':

The maximum allowed intersection over union (IoU) for two predicted bounding boxes independently of their predicted classes. Or, vice versa, when two bounding boxes have an IoU higher than 'max_overlap_class_agnostic', the one with the lower confidence value gets suppressed. As default, 'max_overlap_class_agnostic' is set to 1.0, hence class-agnostic bounding box suppression has no influence.

Default: 'max_overlap_class_agnostic' = '1.0'

'min_confidence':

This parameter determines the minimum confidence the classification of the image part within the bounding box must have in order to keep the proposed bounding box. This means, when apply_dl_model is called, all output bounding boxes with a confidence value smaller than 'min_confidence' are suppressed.

Default: 'min_confidence' = '0.5'
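
These post-processing parameters can also be adapted after training, e.g. to obtain fewer but more confident detections; the values below are examples and assume an existing detection model handle DLModelHandle:

* Keep only confident, weakly overlapping detections.
set_dl_model_param (DLModelHandle, 'min_confidence', 0.7)
set_dl_model_param (DLModelHandle, 'max_overlap', 0.3)
set_dl_model_param (DLModelHandle, 'max_num_detections', 50)
get_dl_model_param (DLModelHandle, 'min_confidence', MinConfidence)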

'num_subscales':

This parameter determines the number of reference bounding boxes which are generated at different scales for each given aspect ratio.

In HALCON, for every pixel of every feature map of the feature pyramid a reference bounding box is proposed. Thereby, the parameter 'aspect_ratios' affects the shape and 'num_subscales' affects the size. In this way, 'aspect_ratios' times 'num_subscales' reference bounding boxes are generated for every such pixel. See the chapter Deep Learning / Object Detection for more explanations on bounding boxes. An example is described in the figure caption below.

Figure: With 'num_subscales' = 2 we generate for every aspect ratio 2 reference bounding boxes of different size on each level: one with the base length (solid line) and an additional, larger one (dotted line). Thereby, in the image these additional reference bounding boxes of the lower level (orange) converge to the reference bounding box of the next higher level (blue).

A reference bounding box of a given level has by default a certain base size in the input image; this base size doubles from one level to the next. With the parameter 'num_subscales', additional reference bounding boxes can be generated, whose sizes converge to the smallest reference bounding box of the next higher level. More precisely, for subscale k (k = 0, ..., 'num_subscales'-1), the reference bounding boxes of a level are scaled by the factor 2^(k/'num_subscales') relative to the base size of that level. For a given ratio (see 'aspect_ratios'), the height and width of such a reference bounding box are obtained from this size by multiplying and dividing it by the square root of the ratio, respectively.

A larger number of subscales increases the number of reference bounding boxes and will therefore increase the runtime and memory consumption.

Default: 'num_subscales' = '3'

Models of 'type'='segmentation'
'ignore_class_ids':

With this parameter you can declare one or multiple classes as 'ignore' classes; see the chapter Deep Learning / Semantic Segmentation for further information. These classes are declared by their ID (integers).

Note that you cannot set a class ID in 'ignore_class_ids' and 'class_ids' simultaneously.
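
A sketch for a segmentation model where one additional class ID (example value 99) is to be ignored; the ID must not be contained in 'class_ids' and an existing model handle DLModelHandle is assumed:

* Declare class ID 99 as an 'ignore' class.
set_dl_model_param (DLModelHandle, 'ignore_class_ids', 99)
get_dl_model_param (DLModelHandle, 'ignore_class_ids', IgnoreClassIDs)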

Execution Information

Parameters

DLModelHandle (input_control)  dl_model → (handle)

Handle of the deep learning model.

GenParamName (input_control)  attribute.name → (string)

Name of the generic parameter.

Default value: 'batch_size'

List of values: 'aspect_ratios', 'backbone', 'batch_size', 'capacity', 'class_ids', 'ignore_class_ids', 'image_dimensions', 'image_height', 'image_num_channels', 'image_range_max', 'image_range_min', 'image_width', 'learning_rate', 'max_level', 'max_num_detections', 'max_overlap', 'max_overlap_class_agnostic', 'min_confidence', 'min_level', 'momentum', 'num_classes', 'num_subscales', 'runtime', 'runtime_init', 'type', 'weight_prior'

GenParamValue (output_control)  attribute.name(-array) → (integer / string / real)

Value of the generic parameter.

Result

If the parameters are valid, the operator get_dl_model_param returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

read_dl_model, set_dl_model_param

Possible Successors

set_dl_model_param, apply_dl_model, train_dl_model_batch

See also

set_dl_model_param

Module

Deep Learning Inference