get_prep_info_class_mlp (Operator)

Name

get_prep_info_class_mlp — Compute the information content of the preprocessed feature vectors of a multilayer perceptron.

Signature

get_prep_info_class_mlp( : : MLPHandle, Preprocessing : InformationCont, CumInformationCont)

Herror T_get_prep_info_class_mlp(const Htuple MLPHandle, const Htuple Preprocessing, Htuple* InformationCont, Htuple* CumInformationCont)

void GetPrepInfoClassMlp(const HTuple& MLPHandle, const HTuple& Preprocessing, HTuple* InformationCont, HTuple* CumInformationCont)

HTuple HClassMlp::GetPrepInfoClassMlp(const HString& Preprocessing, HTuple* CumInformationCont) const

HTuple HClassMlp::GetPrepInfoClassMlp(const char* Preprocessing, HTuple* CumInformationCont) const

HTuple HClassMlp::GetPrepInfoClassMlp(const wchar_t* Preprocessing, HTuple* CumInformationCont) const   (Windows only)

static void HOperatorSet.GetPrepInfoClassMlp(HTuple MLPHandle, HTuple preprocessing, out HTuple informationCont, out HTuple cumInformationCont)

HTuple HClassMlp.GetPrepInfoClassMlp(string preprocessing, out HTuple cumInformationCont)

def get_prep_info_class_mlp(mlphandle: HHandle, preprocessing: str) -> Tuple[Sequence[float], Sequence[float]]

Description

get_prep_info_class_mlp computes the information content of the training vectors that have been transformed with the preprocessing given by Preprocessing. Preprocessing can be set to 'principal_components' or 'canonical_variates'. The preprocessing methods are described with create_class_mlp. The information content is derived from the variations of the transformed components of the feature vector, i.e., it is computed solely based on the training data, independent of any error rate on the training data. The information content is computed for all relevant components of the transformed feature vectors (NumInput for 'principal_components' and min(NumOutput - 1, NumInput) for 'canonical_variates', see create_class_mlp), and is returned in InformationCont as a number between 0 and 1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n components is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use get_prep_info_class_mlp, a sufficient number of samples must be added to the multilayer perceptron (MLP) given by MLPHandle by using add_sample_class_mlp or read_samples_class_mlp.
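As an illustration (the numbers are invented, not taken from a real data set): if InformationCont were returned as [0.6, 0.3, 0.1], then CumInformationCont would be [0.6, 0.9, 1.0], i.e., the first two components together already carry 90% of the information.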

InformationCont and CumInformationCont can be used to decide how many components of the transformed feature vectors contain relevant information. A frequently used criterion is to require that the transformed data represent a certain fraction x% (e.g., 90%) of the information in the data. The number of components to retain can then be read off as the index of the first value of CumInformationCont that lies above x% (see the sketch after this paragraph). The number thus obtained can be used as the value for NumComponents in a new call to create_class_mlp. The call to get_prep_info_class_mlp already requires the creation of an MLP, and hence a setting of NumComponents in create_class_mlp to some initial value. However, at the time get_prep_info_class_mlp is called it is typically not yet known how many components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step approach should typically be used to select NumComponents: In a first step, an MLP with the maximum number for NumComponents is created (NumInput for 'principal_components' and min(NumOutput - 1, NumInput) for 'canonical_variates'). Then, the training samples are added to the MLP and saved in a file using write_samples_class_mlp. Subsequently, get_prep_info_class_mlp is used to determine the information content of the components, and with it NumComponents. After this, a new MLP with the desired number of components is created, and the training samples are read back with read_samples_class_mlp. Finally, the MLP is trained with train_class_mlp.
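For example, with a 90% criterion (the threshold is chosen here for illustration only, not as a fixed recommendation), NumComponents can be determined from CumInformationCont roughly as follows:

* Sketch: pick the smallest number of components whose cumulative
* information content reaches at least 90%.
NumComp := |CumInformationCont|
for J := 0 to |CumInformationCont|-1 by 1
    if (CumInformationCont[J] >= 0.9)
        NumComp := J + 1
        break
    endif
endfor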

Execution Information

Parameters

MLPHandle (input_control)  class_mlp → (handle)

MLP handle.

Preprocessing (input_control)  string → (string)

Type of preprocessing used to transform the feature vectors.

Default value: 'principal_components'

List of values: 'canonical_variates', 'principal_components'

InformationCont (output_control)  real-array → (real)

Relative information content of the transformed feature vectors.

CumInformationCont (output_control)  real-array → (real)

Cumulative information content of the transformed feature vectors.

Example (HDevelop)

* Create the initial MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
    * Generate training features and classes
    * Data = [...]
    * Class = [...]
    add_sample_class_mlp (MLPHandle, Data, Class)
endfor
write_samples_class_mlp (MLPHandle, 'samples.mtf')
* Compute the information content of the transformed features
get_prep_info_class_mlp (MLPHandle, 'principal_components', \
                         InformationCont, CumInformationCont)
* Determine NumComp by inspecting InformationCont and CumInformationCont
* NumComp = [...]
* Create the actual MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'principal_components', NumComp, 42, MLPHandle)
* Train the MLP
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')

Result

If the parameters are valid, the operator get_prep_info_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = 'canonical_variates' is used. This typically indicates that not enough training samples have been stored for each class.
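A defensive call pattern, sketched here in HDevelop (the recovery strategy in the comment is only a suggestion), can catch this case instead of aborting the program:

* Sketch: guard the canonical variates case; error 9211 usually means
* that too few samples per class have been added.
try
    get_prep_info_class_mlp (MLPHandle, 'canonical_variates', \
                             InformationCont, CumInformationCont)
catch (Exception)
    * Add more training samples per class (add_sample_class_mlp) and retry
endtry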

Possible Predecessors

add_sample_class_mlp, read_samples_class_mlp

Possible Successors

clear_class_mlp, create_class_mlp

References

Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.

Module

Foundation