Name
create_ocr_class_svm — Create an OCR classifier using a support vector machine.
Signature

create_ocr_class_svm( : : WidthCharacter, HeightCharacter, Interpolation, Features, Characters, KernelType, KernelParam, Nu, Mode, Preprocessing, NumComponents : OCRHandle)
Herror T_create_ocr_class_svm(const Htuple WidthCharacter, const Htuple HeightCharacter, const Htuple Interpolation, const Htuple Features, const Htuple Characters, const Htuple KernelType, const Htuple KernelParam, const Htuple Nu, const Htuple Mode, const Htuple Preprocessing, const Htuple NumComponents, Htuple* OCRHandle)
void CreateOcrClassSvm(const HTuple& WidthCharacter, const HTuple& HeightCharacter, const HTuple& Interpolation, const HTuple& Features, const HTuple& Characters, const HTuple& KernelType, const HTuple& KernelParam, const HTuple& Nu, const HTuple& Mode, const HTuple& Preprocessing, const HTuple& NumComponents, HTuple* OCRHandle)
void HOCRSvm::HOCRSvm(Hlong WidthCharacter, Hlong HeightCharacter, const HString& Interpolation, const HTuple& Features, const HTuple& Characters, const HString& KernelType, double KernelParam, double Nu, const HString& Mode, const HString& Preprocessing, Hlong NumComponents)
void HOCRSvm::HOCRSvm(Hlong WidthCharacter, Hlong HeightCharacter, const HString& Interpolation, const HString& Features, const HTuple& Characters, const HString& KernelType, double KernelParam, double Nu, const HString& Mode, const HString& Preprocessing, Hlong NumComponents)
void HOCRSvm::HOCRSvm(Hlong WidthCharacter, Hlong HeightCharacter, const char* Interpolation, const char* Features, const HTuple& Characters, const char* KernelType, double KernelParam, double Nu, const char* Mode, const char* Preprocessing, Hlong NumComponents)
void HOCRSvm::CreateOcrClassSvm(Hlong WidthCharacter, Hlong HeightCharacter, const HString& Interpolation, const HTuple& Features, const HTuple& Characters, const HString& KernelType, double KernelParam, double Nu, const HString& Mode, const HString& Preprocessing, Hlong NumComponents)
void HOCRSvm::CreateOcrClassSvm(Hlong WidthCharacter, Hlong HeightCharacter, const HString& Interpolation, const HString& Features, const HTuple& Characters, const HString& KernelType, double KernelParam, double Nu, const HString& Mode, const HString& Preprocessing, Hlong NumComponents)
void HOCRSvm::CreateOcrClassSvm(Hlong WidthCharacter, Hlong HeightCharacter, const char* Interpolation, const char* Features, const HTuple& Characters, const char* KernelType, double KernelParam, double Nu, const char* Mode, const char* Preprocessing, Hlong NumComponents)
static void HOperatorSet.CreateOcrClassSvm(HTuple widthCharacter, HTuple heightCharacter, HTuple interpolation, HTuple features, HTuple characters, HTuple kernelType, HTuple kernelParam, HTuple nu, HTuple mode, HTuple preprocessing, HTuple numComponents, out HTuple OCRHandle)
public HOCRSvm(int widthCharacter, int heightCharacter, string interpolation, HTuple features, HTuple characters, string kernelType, double kernelParam, double nu, string mode, string preprocessing, int numComponents)
public HOCRSvm(int widthCharacter, int heightCharacter, string interpolation, string features, HTuple characters, string kernelType, double kernelParam, double nu, string mode, string preprocessing, int numComponents)
void HOCRSvm.CreateOcrClassSvm(int widthCharacter, int heightCharacter, string interpolation, HTuple features, HTuple characters, string kernelType, double kernelParam, double nu, string mode, string preprocessing, int numComponents)
void HOCRSvm.CreateOcrClassSvm(int widthCharacter, int heightCharacter, string interpolation, string features, HTuple characters, string kernelType, double kernelParam, double nu, string mode, string preprocessing, int numComponents)
Description

create_ocr_class_svm creates an OCR classifier that uses a support vector machine (SVM). The handle of the OCR classifier is returned in OCRHandle.

For a description of how an SVM works, see create_class_svm.

create_ocr_class_svm creates an SVM for classification with the classification mode given by Mode. The length of the feature vector of the SVM (NumFeatures in create_class_svm) is determined from the features that are used for the OCR, which are passed in Features; the features are described below. The kernel is parametrized with KernelType, KernelParam, and Nu as in create_class_svm. The number of classes of the SVM (NumClasses in create_class_svm) is determined from the names of the characters to be used in the OCR, which are passed in Characters. As described for create_class_svm, the parameters Preprocessing and NumComponents can be used to specify a preprocessing of the data (i.e., of the feature vectors). For numerical stability, Preprocessing can typically be set to 'normalization'. To speed up the classification, 'principal_components' or 'canonical_variates' can be used, because the number of input features can be reduced significantly without deteriorating the recognition rate.
The features to be used for the classification are determined by Features, which can contain a tuple of feature names. Each of these feature names results in one or more features being calculated for the classifier. Some of the feature names compute gray value features (e.g., 'pixel_invar'). Because a classifier requires a constant number of features (input variables), a character to be classified is transformed to a standard size, which is determined by WidthCharacter and HeightCharacter. The interpolation to be used for this transformation is determined by Interpolation; it has the same meaning as in affine_trans_image and should be chosen such that no aliasing effects occur in the transformation. For most applications, Interpolation = 'constant' should be used. Note that the size of the transformed character should not be chosen too large, because the generalization properties of the classifier may become poor for large sizes. In particular, for large sizes, small segmentation errors have a large influence on the computed features if gray value features are used, because segmentation errors change the smallest enclosing rectangle of the region, and the character is then zoomed differently than the characters in the training set. In most applications, sizes between 6x8 and 10x14 should be used.
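A minimal HDevelop sketch of a creation call that follows these recommendations is shown below; the character set and the numeric values are illustrative assumptions only (they correspond to the documented default values of the kernel and preprocessing parameters).

* Minimal creation sketch (illustrative values only).
* 8x10 lies within the recommended size range of 6x8 to 10x14,
* and 'constant' interpolation avoids aliasing during the zooming.
Characters := ['0','1','2','3','4','5','6','7','8','9']
create_ocr_class_svm (8, 10, 'constant', 'default', Characters, \
                      'rbf', 0.02, 0.05, 'one-versus-one', \
                      'normalization', 10, OCRHandle)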
The parameter Features can contain the following feature names for the classification of the characters (a sketch showing how several of these names are combined into a tuple follows the list).
- 'default': 'ratio' and 'pixel_invar' are selected.
- 'pixel': Gray values of the character (WidthCharacter x HeightCharacter features).
- 'pixel_invar': Gray values of the character with maximum scaling of the gray values (WidthCharacter x HeightCharacter features).
- 'pixel_binary': Region of the character as a binary image zoomed to a size of WidthCharacter x HeightCharacter (WidthCharacter x HeightCharacter features).
- 'gradient_8dir': Gradients are computed on the character image. The gradient directions are discretized into 8 directions. The amplitude image is decomposed into 8 channels according to these discretized directions. 25 samples on a 5x5 grid are extracted from each channel. These samples are used as features (200 features).
- 'projection_horizontal': Horizontal projection of the gray values (see gray_projections, HeightCharacter features).
- 'projection_horizontal_invar': Maximally scaled horizontal projection of the gray values (HeightCharacter features).
- 'projection_vertical': Vertical projection of the gray values (see gray_projections, WidthCharacter features).
- 'projection_vertical_invar': Maximally scaled vertical projection of the gray values (WidthCharacter features).
- 'ratio': Aspect ratio of the character (see height_width_ratio, 1 feature).
- 'anisometry': Anisometry of the character (see eccentricity, 1 feature).
- 'width': Width of the character before scaling the character to the standard size (not scale-invariant, see height_width_ratio, 1 feature).
- 'height': Height of the character before scaling the character to the standard size (not scale-invariant, see height_width_ratio, 1 feature).
- 'zoom_factor': Difference in size between the character and the values of WidthCharacter and HeightCharacter (not scale-invariant, 1 feature).
- 'foreground': Fraction of pixels in the foreground (1 feature).
- 'foreground_grid_9': Fraction of pixels in the foreground in a 3x3 grid within the smallest enclosing rectangle of the character (9 features).
- 'foreground_grid_16': Fraction of pixels in the foreground in a 4x4 grid within the smallest enclosing rectangle of the character (16 features).
- 'compactness': Compactness of the character (see compactness, 1 feature).
- 'convexity': Convexity of the character (see convexity, 1 feature).
- 'moments_region_2nd_invar': Normalized 2nd moments of the character (see moments_region_2nd_invar, 3 features).
- 'moments_region_2nd_rel_invar': Normalized 2nd relative moments of the character (see moments_region_2nd_rel_invar, 2 features).
- 'moments_region_3rd_invar': Normalized 3rd moments of the character (see moments_region_3rd_invar, 4 features).
- 'moments_central': Normalized central moments of the character (see moments_region_central, 4 features).
- 'moments_gray_plane': Normalized gray value moments and the angle of the gray value plane (see moments_gray_plane, 4 features).
- 'phi': Orientation (angle) of the character (see elliptic_axis, 1 feature).
- 'num_connect': Number of connected components (see connect_and_holes, 1 feature).
- 'num_holes': Number of holes (see connect_and_holes, 1 feature).
- 'cooc': Values of the binary cooccurrence matrix (see gen_cooc_matrix, 12 features).
- 'num_runs': Number of runs in the region normalized by the height (1 feature).
- 'chord_histo': Frequency of the runs per row (not scale-invariant, HeightCharacter features).
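Since Features accepts a tuple of names, several of the entries above can be combined; the number of input features of the SVM is then the sum of the features contributed by the individual entries. The following sketch assumes a character size of 8x10 and an arbitrarily chosen feature combination:

* Sketch: combining feature names (example selection only).
* 'ratio' contributes 1 feature, 'pixel_invar' contributes 8x10 = 80
* features, and 'num_holes' contributes 1 feature, i.e., 82 input
* features in total.
Characters := ['0','1','2','3','4','5','6','7','8','9']
create_ocr_class_svm (8, 10, 'constant', ['ratio','pixel_invar','num_holes'], \
                      Characters, 'rbf', 0.02, 0.05, 'one-versus-one', \
                      'normalization', 10, OCRHandle)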
After the classifier has been created, it is trained using trainf_ocr_class_svm. After this, the classifier can be saved using write_ocr_class_svm. Alternatively, the classifier can be used immediately after training to classify characters using do_ocr_single_class_svm or do_ocr_multi_class_svm.
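A sketch of this workflow is shown below; the file names 'characters.trf' and 'characters.osc' as well as the variables CharacterRegions and Image are placeholders that are assumed to come from the surrounding application.

* Train the classifier from a previously written training file.
trainf_ocr_class_svm (OCRHandle, 'characters.trf', 0.001, 'default')
* Either save the trained classifier for later use ...
write_ocr_class_svm (OCRHandle, 'characters.osc')
clear_ocr_class_svm (OCRHandle)
* ... and read it back in the online application,
read_ocr_class_svm ('characters.osc', OCRHandle)
* ... or classify segmented characters immediately after training.
do_ocr_multi_class_svm (CharacterRegions, Image, OCRHandle, Class)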
A comparison of the SVM and the multi-layer perceptron (MLP, see create_ocr_class_mlp) typically shows that SVMs are faster to train, especially for large training sets, and achieve slightly better recognition rates than MLPs. The MLP is faster at classification and should therefore be preferred in time-critical applications. Note that this guideline assumes optimal tuning of the parameters.
Execution Information

- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.
Parameters

WidthCharacter
Width of the rectangle to which the gray values of the segmented character are zoomed.
Default value: 8
Suggested values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20
Typical range of values: 4 ≤ WidthCharacter ≤ 20

HeightCharacter
Height of the rectangle to which the gray values of the segmented character are zoomed.
Default value: 10
Suggested values: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14, 16, 20
Typical range of values: 4 ≤ HeightCharacter ≤ 20

Interpolation
Interpolation mode for the zooming of the characters.
Default value: 'constant'
List of values: 'bicubic', 'bilinear', 'constant', 'nearest_neighbor', 'weighted'

Features
Features to be used for classification.
Default value: 'default'
List of values: 'anisometry', 'chord_histo', 'compactness', 'convexity', 'cooc', 'default', 'foreground', 'foreground_grid_16', 'foreground_grid_9', 'gradient_8dir', 'height', 'moments_central', 'moments_gray_plane', 'moments_region_2nd_invar', 'moments_region_2nd_rel_invar', 'moments_region_3rd_invar', 'num_connect', 'num_holes', 'num_runs', 'phi', 'pixel', 'pixel_binary', 'pixel_invar', 'projection_horizontal', 'projection_horizontal_invar', 'projection_vertical', 'projection_vertical_invar', 'ratio', 'width', 'zoom_factor'

Characters
All characters of the character set to be read.
Default value: ['0','1','2','3','4','5','6','7','8','9']

KernelType
The kernel type.
Default value: 'rbf'
List of values: 'linear', 'polynomial_homogeneous', 'polynomial_inhomogeneous', 'rbf'

KernelParam
Additional parameter for the kernel function.
Default value: 0.02
Suggested values: 0.01, 0.02, 0.05, 0.1, 0.5

Nu
Regularization constant of the SVM.
Default value: 0.05
Suggested values: 0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3
Restriction: Nu > 0.0 && Nu < 1.0

Mode
The mode of the SVM.
Default value: 'one-versus-one'
List of values: 'one-versus-all', 'one-versus-one'

Preprocessing
Type of preprocessing used to transform the feature vectors.
Default value: 'normalization'
List of values: 'canonical_variates', 'none', 'normalization', 'principal_components'

NumComponents
Preprocessing parameter: number of transformed features (ignored for Preprocessing = 'none' and Preprocessing = 'normalization').
Default value: 10
Suggested values: 1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100
Restriction: NumComponents >= 1

OCRHandle
Handle of the OCR classifier.
Example (HDevelop)

read_image (Image, 'letters')
* Segment the image.
binary_threshold (Image, Region, 'otsu', 'dark', UsedThreshold)
dilation_circle (Region, RegionDilation, 3.5)
connection (RegionDilation, ConnectedRegions)
intersection (ConnectedRegions, Region, RegionIntersection)
sort_region (RegionIntersection, Characters, 'character', 'true', 'row')
* Generate the training file.
count_obj (Characters, Number)
Classes := []
for J := 0 to 25 by 1
Classes := [Classes,gen_tuple_const(20,chr(ord('a')+J))]
endfor
Classes := [Classes,gen_tuple_const(20,'.')]
write_ocr_trainf (Characters, Image, Classes, 'letters.trf')
* Generate and train the classifier.
read_ocr_trainf_names ('letters.trf', CharacterNames, CharacterCount)
create_ocr_class_svm (8, 10, 'constant', 'default', CharacterNames, \
'rbf', 0.01, 0.01, 'one-versus-all', \
'principal_components', 10, OCRHandle)
trainf_ocr_class_svm (OCRHandle, 'letters.trf', 0.001, 'default')
* Re-classify the characters in the image.
do_ocr_multi_class_svm (Characters, Image, OCRHandle, Class)
clear_ocr_class_svm (OCRHandle)
Result

If the parameters are valid, the operator create_ocr_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Possible Successors

trainf_ocr_class_svm
Alternatives

create_ocr_class_mlp
See also

do_ocr_single_class_svm, do_ocr_multi_class_svm, clear_ocr_class_svm, create_class_svm, train_class_svm, classify_class_svm
Module

OCR/OCV