clear_samples_class_mlp (Operator)
Name
clear_samples_class_mlp
— Clear the training data of a multilayer perceptron.
Signature
Herror T_clear_samples_class_mlp(const Htuple MLPHandle)
Description
clear_samples_class_mlp clears all training samples that have been added to the multilayer perceptron (MLP) MLPHandle with add_sample_class_mlp or read_samples_class_mlp. clear_samples_class_mlp should only be used if the MLP is trained in the same process that uses the MLP for evaluation with evaluate_class_mlp or for classification with classify_class_mlp. In this case, the memory required for the training samples can be freed with clear_samples_class_mlp, and hence memory can be saved. In the normal usage, in which the MLP is trained offline and written to a file with write_class_mlp, it is typically unnecessary to call clear_samples_class_mlp because write_class_mlp does not save the training samples; hence the online process, which reads the MLP with read_class_mlp, requires no memory for the training samples.
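The following HDevelop-style sketch illustrates the in-process workflow described above; the feature vectors, network dimensions, and training parameters are hypothetical placeholders, not values taken from this reference entry.

* Create an MLP with 2 input features, 5 hidden units, and 2 output classes
* (all dimensions and values here are made-up placeholders).
create_class_mlp (2, 5, 2, 'softmax', 'normalization', 2, 42, MLPHandle)
* Add hypothetical training samples: feature vector plus class index.
add_sample_class_mlp (MLPHandle, [0.1, 0.8], 0)
add_sample_class_mlp (MLPHandle, [0.9, 0.2], 1)
* Train the MLP in the same process that will use it.
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
* The samples are no longer needed after training;
* free their memory before classifying.
clear_samples_class_mlp (MLPHandle)
* Classify new data with the same handle.
classify_class_mlp (MLPHandle, [0.2, 0.7], 1, Class, Confidence)
* Finally, free the classifier itself.
clear_class_mlp (MLPHandle)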
Execution Information
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
This operator modifies the state of the following input parameter: MLPHandle. The value of this parameter may not be shared across multiple threads without external synchronization.
Parameters
MLPHandle (input_control, state is modified) class_mlp → (handle)
MLP handle.
Result
If the parameters are valid, the operator clear_samples_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Possible Predecessors
train_class_mlp, write_samples_class_mlp
See also
create_class_mlp, clear_class_mlp, add_sample_class_mlp, read_samples_class_mlp
Module
Foundation