set_dl_classifier_param — Set the parameters of a deep-learning-based classifier.
The network architectures allow different image dimensions. However, for networks with at least one fully connected layer, such a change makes retraining necessary. Networks without fully connected layers are directly applicable to different image sizes. However, images with a size differing from the size with which the classifier has been trained are likely to show a reduced classification accuracy.
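Why fully connected layers impose this constraint can be illustrated with a short, framework-independent sketch (the layer shapes below are hypothetical and not taken from any HALCON network): a convolution kernel has the same weights for any input size, whereas a fully connected layer needs one weight per input value, so its weight matrix becomes invalid when the image dimensions change.

```python
# Illustrative sketch: why fully connected layers tie a network to a fixed
# input size while convolutional layers do not. Shapes here are hypothetical.
def conv_output_size(size, kernel=3, stride=2, padding=1):
    """Spatial output size of one conv layer (integer arithmetic)."""
    return (size + 2 * padding - kernel) // stride + 1

def fc_weight_count(width, height, channels, units):
    """A fully connected layer needs one weight per input value and unit."""
    return width * height * channels * units

# A conv kernel keeps the same weights for any input size; only the spatial
# output size changes:
print(conv_output_size(224), conv_output_size(300))

# The fully connected layer's weight matrix, however, changes shape with the
# image size, so the trained weights no longer fit and retraining is needed:
print(fc_weight_count(224, 224, 64, 10))
print(fc_weight_count(300, 300, 64, 10))
```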
'batch_size': Number of images (and corresponding labels) in a batch that is transferred to device memory. The batch of images which are processed simultaneously in a single training iteration contains a number of images equal to 'batch_size' times 'batch_size_multiplier'. Please refer to train_dl_classifier_batch for further details. The parameter 'batch_size' is stored in the pretrained classifier. Per default, 'batch_size' is set such that a training of the pretrained classifier with up to 100 classes can be easily performed on a device with 8 gigabytes of memory.
The default value of 'batch_size' hence depends on the respective pretrained classifier.
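How 'batch_size' and 'batch_size_multiplier' combine can be sketched as follows (a minimal plain-Python illustration of the arithmetic described above, not HALCON code; the function name is hypothetical):

```python
# Conceptual sketch (not the HALCON API): the number of images contributing
# to a single training iteration is 'batch_size' * 'batch_size_multiplier'.
def effective_batch_size(batch_size, batch_size_multiplier=1):
    """Images contributing to a single gradient update."""
    return batch_size * batch_size_multiplier

# With 'batch_size' = 32 and 'batch_size_multiplier' = 4, one training
# iteration covers 128 images, while only 32 images (plus labels) need to
# reside in device memory at any one time.
print(effective_batch_size(32, 4))  # → 128
```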
'batch_size_multiplier': Multiplier for 'batch_size' to enable training with larger numbers of images in one step, which would otherwise not be possible due to GPU memory limitations. For detailed information see train_dl_classifier_batch. This model parameter does not have any impact during evaluation and inference. For the pretrained classifiers, the default value of 'batch_size_multiplier' is set to 1.
'image_num_channels': Number of channels of the images the network will process. Possible are one channel (gray value image) or three channels (three-channel image). The default value is given by the network, see read_dl_classifier. Changing to a single-channel image modifies the network configuration. This process removes the color information contained in certain layers and is not invertible.
'image_dimensions': Tuple containing the image dimensions 'image_width', 'image_height', and the number of channels 'image_num_channels'. The default values are given by the network, see read_dl_classifier. Concerning the number of channels, the values one (gray value image) or three (three-channel image) are possible. Changing to a single-channel image modifies the network configuration. This process removes the color information contained in certain layers and is not invertible.
'runtime': Defines the device on which the operators will be executed. Per default, 'runtime' is set to 'gpu'. On Arm architectures the 'cpu' runtime uses a global thread pool. You may specify the number of threads with the set_system parameter 'thread_num'. You cannot specify a thread-specific number of threads on Arm architectures. Note that GPU-specific parameters have no effect if 'runtime' is set to 'cpu'.
'weight_prior': Regularization parameter used for regularization of the loss function. Regularization is helpful in the presence of overfitting during the classifier training. If the hyperparameter 'weight_prior' is non-zero, the following regularization term is added to the loss function (see also train_dl_classifier_batch):

    ('weight_prior' / 2) * sum_k (w_k)^2

Here the index k runs over all weights of the network, except for the biases, which are not regularized. The regularization term generally penalizes large weights, thus pushing the weights towards zero, which effectively reduces the complexity of the model. Simply put: regularization favors simpler models that are less likely to learn noise in the data and generalize better.
In case the classifier overfits the data, it is strongly recommended to try different values for the parameter 'weight_prior' to improve the generalization properties of the neural network. Choosing its value is a trade-off between the model's ability to generalize, overfitting, and underfitting. If 'weight_prior' is too small, the model might overfit; if it is too large, the model might lose its ability to fit the data, because all weights are effectively zero. For finding an ideal value for 'weight_prior', we recommend a cross-validation, i.e., to perform the training for a range of values and choose the value that results in the best validation error. For typical applications, we recommend testing the values for 'weight_prior' on a logarithmic scale. If the training takes a very long time, one might consider performing the hyperparameter optimization on a reduced amount of data.
The default value depends on the classifier.
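The effect of 'weight_prior' on the training loss can be sketched conceptually (a plain-Python illustration under the assumption of a standard L2 penalty as described above, not the HALCON internals; the function name is hypothetical):

```python
# Conceptual sketch: L2 ("weight prior") regularization added to a data loss.
# This illustrates the regularization term described above; it is not taken
# from the HALCON implementation.
def regularized_loss(data_loss, weights, biases, weight_prior):
    # Only the weights are penalized; the biases are deliberately excluded.
    penalty = 0.5 * weight_prior * sum(w * w for w in weights)
    return data_loss + penalty

# A larger 'weight_prior' raises the total loss for the same weights,
# steering training towards smaller weights, i.e. a simpler model.
weights, biases = [0.5, -0.5], [0.1]
loss_small = regularized_loss(1.0, weights, biases, weight_prior=1e-4)
loss_large = regularized_loss(1.0, weights, biases, weight_prior=1e-1)
```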
For an explanation of the concept of deep-learning-based classification
see the introduction of chapter Deep Learning / Classification.
The workflow involving this legacy operator is described in the chapter
Legacy / DL Classification.
If the parameters are valid, the operator set_dl_classifier_param returns the value TRUE. If necessary, an exception is raised.