Number of images (and corresponding
labels) in a batch and thus the number of images that are processed
simultaneously in a single training iteration. Please refer
to train_dl_classifier_batch for further details.
This parameter is stored in the pretrained classifier. By default,
'batch_size' is set such that a training of the pretrained
classifier with up to 100 classes can easily be performed on a
device with 8 gigabytes of memory.
For the pretrained classifiers, the default values are hence given as
follows:

pretrained classifier    default value of 'batch_size'
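As an illustration of what 'batch_size' controls, the following sketch (plain Python, not HALCON code) shows how a training set is consumed in batches, one batch per training iteration:

```python
# Illustrative sketch (not HALCON code): 'batch_size' determines how many
# images are processed together in one training iteration.
def iterate_batches(samples, batch_size):
    """Yield consecutive batches of at most batch_size samples."""
    for start in range(0, len(samples), batch_size):
        yield samples[start:start + batch_size]

images = list(range(10))          # stand-ins for 10 training images
batches = list(iterate_batches(images, batch_size=4))
# 10 images with batch_size=4 -> 3 iterations per epoch: 4 + 4 + 2 images
print([len(b) for b in batches])  # [4, 4, 2]
```

A larger 'batch_size' means fewer iterations per epoch but a higher peak memory requirement, which is why the defaults above are tuned to the available device memory.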
The regularization term adds (weight_prior / 2) * Σ_k w_k² to the
training error. Here the index k runs over all weights of the network,
except for the biases, which are not regularized. The regularization term
generally penalizes large weights, thus
pushing the weights towards zero, which effectively reduces the complexity
of the model.
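The penalty described above can be sketched as follows (a minimal illustration of standard L2 weight decay, not HALCON's internal implementation):

```python
# Sketch of the L2 regularization term: (weight_prior / 2) * sum of squared
# weights, with biases excluded from the sum.
def l2_penalty(weights, biases, weight_prior):
    # Biases are not regularized, so only 'weights' enter the sum;
    # 'biases' is accepted only to make the exclusion explicit.
    return 0.5 * weight_prior * sum(w * w for w in weights)

weights = [0.5, -1.0, 2.0]   # toy weight values
biases = [0.3]               # excluded from the penalty
penalty = l2_penalty(weights, biases, weight_prior=0.1)
print(penalty)               # 0.5 * 0.1 * (0.25 + 1.0 + 4.0) = 0.2625
```

With weight_prior = 0.0 the penalty vanishes, which matches the default behavior of no regularization.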
Simply put: regularization favors simpler models that are less likely to
learn noise in the data and thus generalize better. By default, no
regularization is used, i.e., 'weight_prior' is set to
0.0. In case the classifier overfits the data, it is strongly
recommended to try different values for the parameter
'weight_prior' to improve the generalization properties of the
neural network. Choosing its value is a trade-off between the model's
ability to generalize, overfitting, and underfitting.
If 'weight_prior' is too small, the model might overfit; if it is too
large, the model might lose its ability to fit the data, because all
weights are effectively zero. For finding an ideal value for
'weight_prior', we recommend a cross-validation, i.e., performing the
training for a range of values and choosing the value that results in the
best validation error. For typical applications, we recommend testing the
values for 'weight_prior' on a logarithmic scale. If the training takes a very
long time, one might consider performing the hyperparameter optimization
on a reduced amount of data.
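The recommended search can be sketched as follows. Note that `validation_error` here is a hypothetical stand-in for a full train-and-validate run; the candidate range is an illustrative assumption, not a prescribed one:

```python
import math

# Hypothetical stand-in for training the classifier with the given
# 'weight_prior' and measuring the error on a validation set. Here a toy
# convex surrogate with its minimum near weight_prior = 1e-3 is used.
def validation_error(weight_prior):
    return (math.log10(weight_prior) + 3.0) ** 2

# Candidate values spaced on a logarithmic scale (assumed range 1e-6..1e-1).
candidates = [10.0 ** e for e in range(-6, 0)]

# Keep the candidate with the lowest validation error.
best = min(candidates, key=validation_error)
print(best)  # 0.001
```

In a real application each call to `validation_error` would involve a full training run, which is why performing this sweep on a reduced amount of data can save considerable time.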
For an explanation of the concept of deep-learning-based classification
see the introduction of chapter Deep Learning / Classification.
To run this operator, cuDNN is required when setting 'gpu'
or 'runtime_init'. For further details, please refer to the
Installation Guide, paragraph Requirements for Deep Learning.
List of values: 'batch_size', 'classes', 'gpu', 'learning_rate', 'momentum', 'runtime_init', 'weight_prior'
If the parameters are valid, the operator
set_dl_classifier_param returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.