create_dl_layer_batch_normalization (Operator)
Name
create_dl_layer_batch_normalization - Create a batch normalization layer.
Signature 
void CreateDlLayerBatchNormalization (const HTuple& DLLayerInput, const HTuple& LayerName, const HTuple& Momentum, const HTuple& Epsilon, const HTuple& Activation, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* DLLayerBatchNorm)

HDlLayer HDlLayer::CreateDlLayerBatchNormalization (const HString& LayerName, const HTuple& Momentum, const HTuple& Epsilon, const HTuple& Activation, const HTuple& GenParamName, const HTuple& GenParamValue) const

HDlLayer HDlLayer::CreateDlLayerBatchNormalization (const char* LayerName, const HTuple& Momentum, const HTuple& Epsilon, const HTuple& Activation, const HTuple& GenParamName, const HTuple& GenParamValue) const

HDlLayer HDlLayer::CreateDlLayerBatchNormalization (const wchar_t* LayerName, const HTuple& Momentum, const HTuple& Epsilon, const HTuple& Activation, const HTuple& GenParamName, const HTuple& GenParamValue) const   (Windows only)

static void HOperatorSet.CreateDlLayerBatchNormalization (HTuple DLLayerInput, HTuple layerName, HTuple momentum, HTuple epsilon, HTuple activation, HTuple genParamName, HTuple genParamValue, out HTuple DLLayerBatchNorm)

HDlLayer HDlLayer.CreateDlLayerBatchNormalization (string layerName, HTuple momentum, HTuple epsilon, HTuple activation, HTuple genParamName, HTuple genParamValue)
 
Description 
The operator create_dl_layer_batch_normalization creates a batch normalization layer whose handle is returned in DLLayerBatchNorm. Batch normalization normalizes the values of the feeding layer over each batch and subsequently applies a learnable linear scale and shift transformation; it is mostly used to stabilize and thereby speed up the training. The parameter Momentum determines how the mean and variance statistics used for the normalization are updated during training. The following values are supported for Momentum (the underlying computation is sketched after the list):
Given number: Combines mean and variance values by an exponential moving average, using the given number as momentum. For example: 0.9. This is the default and recommended option.
Restriction: 0 ≤ Momentum < 1
 
'auto': Combines mean and variance values by a cumulative moving average. This is only recommended in case the parameters of all previous layers in the network are frozen, i.e., have a learning rate of 0.
'freeze': Stops the adjustment of the mean and variance and their values stay fixed. In this case, the mean and variance are used during training for normalizing a batch, analogously to how the batch normalization operates during inference. The parameters of the linear scale and shift transformation, however, remain learnable.
 
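For orientation, a minimal sketch of the standard batch normalization computation per channel, following the formulation of Ioffe and Szegedy cited under References; these are the common textbook formulas, not formulas quoted from this reference page. Here \gamma and \beta are the learnable scale and shift, \epsilon corresponds to Epsilon, and the last line shows the running-statistics update assumed for a numeric Momentum m, where the running value keeps the fraction m of its previous state:

  \mu_B = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad
  \sigma_B^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu_B)^2

  \hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
  y_i = \gamma\,\hat{x}_i + \beta

  \mu \leftarrow m\,\mu + (1 - m)\,\mu_B, \qquad
  \sigma^2 \leftarrow m\,\sigma^2 + (1 - m)\,\sigma_B^2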
The parameter Epsilon sets the variance offset, a small constant added to the variance to avoid divisions by zero during the normalization.
The parameter DLLayerInput specifies the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using create_dl_model, each layer of the created network must have a unique name.
The parameter Activation determines whether a ReLU activation ('relu') is applied directly after the batch normalization or not ('none').
It is not possible to specify a leaky ReLU or a sigmoid activation function.
Use create_dl_layer_activation to add such an activation as a separate layer, as sketched below.
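For instance, a sigmoid following the batch normalization could be appended like this (a sketch only; the layer names 'bn1' and 'sig1' are placeholders, and the argument order of create_dl_layer_activation (input layer, layer name, activation type, generic parameter names/values, output layer) is assumed rather than quoted from its own reference page):

create_dl_layer_batch_normalization (DLLayerConvolution, 'bn1', 0.9, 0.0001, \
                                     'none', [], [], DLLayerBatchNorm)
* Sigmoid cannot be set via Activation; append it as a separate layer.
create_dl_layer_activation (DLLayerBatchNorm, 'sig1', 'sigmoid', [], [], \
                            DLLayerSigmoid)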
The following generic parameters GenParamName and the corresponding values GenParamValue are supported (a usage sketch follows the list):
'bias_filler': See create_dl_layer_convolution.
List of values: 'xavier', 'msra', 'const'
Default: 'const'
 
'bias_filler_const_val': Constant value.
Restriction: 'bias_filler' = 'const'
Default: 0
 
'bias_filler_variance_norm': See create_dl_layer_convolution.
List of values: 'norm_out', 'norm_in', 'norm_average'
Restriction: 'bias_filler' = 'msra'
Default: 'norm_out'
 
'bias_term': Determines whether the created batch normalization layer has a bias term ('true') or not ('false').
Default: 'true'
 
'is_inference_output': Determines whether apply_dl_model will include the output of this layer in the dictionary DLResultBatch even without specifying this layer in Outputs ('true') or not ('false').
Default: 'false'
 
'learning_rate_multiplier': Multiplier for the learning rate for this layer that is used during training. If 'learning_rate_multiplier' is set to 0.0, the layer is skipped during training.
Default: 1.0
 
'learning_rate_multiplier_bias': Multiplier for the learning rate of the bias term. The total bias learning rate is the product of 'learning_rate_multiplier_bias' and 'learning_rate_multiplier'.
Default: 1.0
 
'upper_bound': Float value defining an upper bound for a rectified linear unit. If the activation layer is part of a model which has been created using create_dl_model, the upper bound can also be set or removed using set_dl_model_layer_param with the parameter 'upper_bound'.
Default: []
 
'weight_filler': See create_dl_layer_convolution.
List of values: 'xavier', 'msra', 'const'
Default: 'const'
 
'weight_filler_const_val': See create_dl_layer_convolution.
Default: 1.0
 
'weight_filler_variance_norm': See create_dl_layer_convolution.
List of values: 'norm_in', 'norm_out', 'norm_average'
Restriction: 'weight_filler' = 'msra'
Default: 'norm_in'
 
 
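To illustrate how these generic parameters are passed, the following sketch (parameter values are examples only and the layer name 'bn_frozen' is a placeholder) creates a batch normalization layer whose parameters are excluded from training and whose output is additionally returned by apply_dl_model:

create_dl_layer_batch_normalization (DLLayerConvolution, 'bn_frozen', 'freeze', \
                                     0.0001, 'none', \
                                     ['learning_rate_multiplier', \
                                      'is_inference_output'], \
                                     [0.0, 'true'], DLLayerBatchNorm)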
Certain parameters of layers created using the operator create_dl_layer_batch_normalization can be set and retrieved using further operators. The following table gives an overview, which parameters can be set using set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators set_dl_model_layer_param and get_dl_model_layer_param require a model created by create_dl_model (see the sketch after the table).
Generic Layer Parameters           set   get
'bias_filler'                       x     x
'bias_filler_const_val'             x     x
'bias_filler_variance_norm'         x     x
'bias_term'                               x
'is_inference_output'               x     x
'learning_rate_multiplier'          x     x
'learning_rate_multiplier_bias'     x     x
'num_trainable_params'                    x
'upper_bound'                       x     x
'weight_filler'                     x     x
'weight_filler_const_val'           x     x
'weight_filler_variance_norm'       x     x
 
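A brief sketch of how such parameters could be accessed once the layer is part of a model (the model handle DLModel and the layer name 'bn1' are placeholders; the argument order model handle, layer name, parameter name, parameter value is assumed for set_dl_model_layer_param and get_dl_model_layer_param):

* Reduce the learning rate multiplier of the layer 'bn1'.
set_dl_model_layer_param (DLModel, 'bn1', 'learning_rate_multiplier', 0.5)
* Query the number of trainable parameters of the same layer.
get_dl_model_layer_param (DLModel, 'bn1', 'num_trainable_params', NumTrainableParams)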
Execution Information 
  Multithreading type: reentrant (runs in parallel with non-exclusive operators). 
Multithreading scope: global (may be called from any thread). 
  Processed without parallelization. 
 
Parameters 
  
DLLayerInput (input_control)  dl_layer → (handle)
  Feeding layer.

LayerName (input_control)  string → (string)
  Name of the output layer.

Momentum (input_control)  string → (string / real)
  Momentum.
  Default: 0.9
  List of values: 0.9, 0.99, 0.999, 'auto', 'freeze'

Epsilon (input_control)  number → (real)
  Variance offset.
  Default: 0.0001

Activation (input_control)  string → (string)
  Optional activation function.
  Default: 'none'
  List of values: 'none', 'relu'

GenParamName (input_control)  attribute.name(-array) → (string)
  Generic input parameter names.
  Default: []
  List of values: 'bias_filler', 'bias_filler_const_val', 'bias_filler_variance_norm', 'bias_term', 'is_inference_output', 'learning_rate_multiplier', 'learning_rate_multiplier_bias', 'upper_bound', 'weight_filler', 'weight_filler_const_val', 'weight_filler_variance_norm'

GenParamValue (input_control)  attribute.value(-array) → (string / integer / real)
  Generic input parameter values.
  Default: []
  Suggested values: 'xavier', 'msra', 'const', 'nearest_neighbor', 'bilinear', 'norm_in', 'norm_out', 'norm_average', 'true', 'false', 1.0, 0.9, 0.0

DLLayerBatchNorm (output_control)  dl_layer → (handle)
  Batch normalization layer.
 
Example (HDevelop) 
create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
* In practice, one typically sets ['bias_term'], ['false'] for a convolution
* that is directly followed by a batch normalization layer.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1, 64, 1, \
                             'none', 'none', ['bias_term'], ['false'], \
                             DLLayerConvolution)
create_dl_layer_batch_normalization (DLLayerConvolution, 'bn1', 0.9, \
                                     0.0001, 'none', [], [], \
                                     DLLayerBatchNorm)
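* A possible continuation (a sketch, not part of the original example):
* build a model from the layer graph, assuming the usual create_dl_model
* call that takes the output layer(s) and returns the model handle.
create_dl_model (DLLayerBatchNorm, DLModel)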
 
Possible Predecessors 
create_dl_layer_convolution
Possible Successors 
create_dl_layer_activation, create_dl_layer_convolution
References 
Sergey Ioffe and Christian Szegedy,
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift,"
Proceedings of the 32nd International Conference on Machine Learning,
(ICML) 2015, Lille, France, 6-11 July 2015, pp. 448-456
Module 
Deep Learning Professional