API - Initializers¶
To keep TensorLayerX simple, TensorLayerX wraps only some basic initializers. For more complex initialization schemes, the backend framework's API (TensorFlow, MindSpore, PaddlePaddle, or PyTorch) is required.
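In practice, an initializer instance is passed to a layer when the layer is constructed. A minimal sketch, assuming a tlx.nn.Linear layer that accepts initializer instances via W_init and b_init arguments:

>>> import tensorlayerx as tlx
>>> # assumption: tlx.nn.Linear accepts W_init/b_init initializer instances
>>> layer = tlx.nn.Linear(out_features=100, in_features=50,
...                       W_init=tlx.initializers.TruncatedNormal(stddev=0.02),
...                       b_init=tlx.initializers.Zeros())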
The following initializers are available:

Initializer – Initializer base class: all initializers inherit from this class.
Zeros – Initializer that generates tensors initialized to 0.
Ones – Initializer that generates tensors initialized to 1.
Constant – Initializer that generates tensors initialized to a constant value.
RandomUniform – Initializer that generates tensors with a uniform distribution.
RandomNormal – Initializer that generates tensors with a normal distribution.
TruncatedNormal – Initializer that generates a truncated normal distribution.
HeNormal – He normal initializer.
HeUniform – He uniform initializer.
deconv2d_bilinear_upsampling_initializer – Returns the initializer that can be passed to DeConv2dLayer for initializing the weights in correspondence to channel-wise bilinear up-sampling.
XavierNormal – Implements the Xavier weight initializer from the paper by Xavier Glorot and Yoshua Bengio, using a normal distribution.
XavierUniform – Implements the Xavier weight initializer from the paper by Xavier Glorot and Yoshua Bengio, using a uniform distribution.
Initializer¶
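All initializers on this page share a callable interface: an initializer instance maps a shape and dtype to a tensor. A hedged sketch of a custom subclass, assuming __call__(shape, dtype) is the dispatch point (as the built-in examples below suggest) and that tlx.ones exists with this signature:

>>> import tensorlayerx as tlx
>>> class HalfOnes(tlx.initializers.Initializer):
...     # hypothetical custom initializer, for illustration only
...     def __call__(self, shape, dtype=tlx.float32):
...         return tlx.ones(shape, dtype=dtype) * 0.5
>>> init = HalfOnes()
>>> print(init(shape=(5, 10)))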
Zeros¶
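A minimal usage sketch, mirroring the examples for the other initializers on this page; the tlx.initializers.Zeros spelling is assumed from the class name:

>>> import tensorlayerx as tlx
>>> init = tlx.initializers.Zeros()
>>> print(init(shape=(5, 10), dtype=tlx.float32))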
Ones¶
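A minimal usage sketch, again assuming the tlx.initializers.Ones spelling from the class name:

>>> import tensorlayerx as tlx
>>> init = tlx.initializers.Ones()
>>> print(init(shape=(5, 10), dtype=tlx.float32))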
Constant¶
class tensorlayerx.nn.initializers.Constant(value=0)¶
Initializer that generates tensors initialized to a constant value.
- Parameters:
value – A python scalar or a numpy array. The assigned value.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.constant(value=10)
>>> print(init(shape=(5, 10), dtype=tlx.float32))
RandomUniform¶
class tensorlayerx.nn.initializers.RandomUniform(minval=-0.05, maxval=0.05, seed=None)¶
Initializer that generates tensors with a uniform distribution.
- Parameters:
minval – A python scalar or a scalar tensor. Lower bound of the range of random values to generate.
maxval – A python scalar or a scalar tensor. Upper bound of the range of random values to generate.
seed – A Python integer. Used to seed the random generator.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.random_uniform(minval=-0.05, maxval=0.05)
>>> print(init(shape=(5, 10), dtype=tlx.float32))
RandomNormal¶
class tensorlayerx.nn.initializers.RandomNormal(mean=0.0, stddev=0.05, seed=None)¶
Initializer that generates tensors with a normal distribution.
- Parameters:
mean – A python scalar or a scalar tensor. Mean of the random values to generate.
stddev – A python scalar or a scalar tensor. Standard deviation of the random values to generate.
seed – A Python integer. Used to seed the random generator.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.random_normal(mean=0.0, stddev=0.05)
>>> print(init(shape=(5, 10), dtype=tlx.float32))
TruncatedNormal¶
class tensorlayerx.nn.initializers.TruncatedNormal(mean=0.0, stddev=0.05, seed=None)¶
Initializer that generates a truncated normal distribution.
These values are similar to values from a RandomNormal except that values more than two standard deviations from the mean are discarded and re-drawn. This is the recommended initializer for neural network weights and filters.
- Parameters:
mean – A python scalar or a scalar tensor. Mean of the random values to generate.
stddev – A python scalar or a scalar tensor. Standard deviation of the random values to generate.
seed – A Python integer. Used to seed the random generator.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.truncated_normal(mean=0.0, stddev=0.05)
>>> print(init(shape=(5, 10), dtype=tlx.float32))
HeNormal¶
class tensorlayerx.nn.initializers.HeNormal(a=0, mode='fan_in', nonlinearity='leaky_relu', seed=None)¶
He normal initializer.
The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where
\[\text{std} = \frac{\text{gain}}{\sqrt{fan\_mode}}\]
- Parameters:
a – int or float. The negative slope of the rectifier used after this layer (only used with 'leaky_relu').
mode – str. Either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass; choosing 'fan_out' preserves the magnitudes in the backward pass.
nonlinearity – str. The non-linear function name; recommended for use only with 'relu' or 'leaky_relu' (default).
seed – int. Used to seed the random generator.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.HeNormal(a=0, mode='fan_out', nonlinearity='relu')
>>> print(init(shape=(5, 10), dtype=tlx.float32))
HeUniform¶
class tensorlayerx.nn.initializers.HeUniform(a=0, mode='fan_in', nonlinearity='leaky_relu', seed=None)¶
He uniform initializer.
The resulting tensor will have values sampled from \(\mathcal{U}(-\text{bound}, \text{bound})\) where
\[\text{bound} = \text{gain} \times \sqrt{\frac{3}{fan\_mode}}\]
- Parameters:
a – int or float. The negative slope of the rectifier used after this layer (only used with 'leaky_relu').
mode – str. Either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass; choosing 'fan_out' preserves the magnitudes in the backward pass.
nonlinearity – str. The non-linear function name; recommended for use only with 'relu' or 'leaky_relu' (default).
seed – int. Used to seed the random generator.
Examples
>>> import tensorlayerx as tlx
>>> init = tlx.initializers.HeUniform(a=0, mode='fan_in', nonlinearity='relu')
>>> print(init(shape=(5, 10), dtype=tlx.float32))
deconv2d_bilinear_upsampling_initializer¶
tensorlayerx.nn.initializers.deconv2d_bilinear_upsampling_initializer(shape)¶
Returns the initializer that can be passed to DeConv2dLayer for initializing the weights in correspondence to channel-wise bilinear up-sampling. Used in segmentation approaches such as FCN (https://arxiv.org/abs/1605.06211).
- Parameters:
shape (tuple of int) – The shape of the filters, [height, width, output_channels, in_channels]. It must match the shape passed to DeConv2dLayer.
- Returns:
A constant initializer with weights set to correspond to per-channel bilinear upsampling when passed as W_init to DeConv2dLayer.
- Return type:
tf.constant_initializer
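Examples
A usage sketch for initializing a 2x channel-wise bilinear upsampling filter. The filter-size formula is the common bilinear-kernel convention, not part of the documented API, and the tlx.nn.initializers path is assumed from the signature above:

>>> import tensorlayerx as tlx
>>> rescale_factor = 2
>>> filter_size = 2 * rescale_factor - rescale_factor % 2  # common bilinear kernel size; 4 for 2x
>>> num_channels = 3
>>> W_init = tlx.nn.initializers.deconv2d_bilinear_upsampling_initializer(
...     shape=(filter_size, filter_size, num_channels, num_channels))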
XavierNormal¶
class tensorlayerx.nn.initializers.XavierNormal(gain=1.0, seed=None)¶
This class implements the Xavier weight initializer from the paper by Xavier Glorot and Yoshua Bengio, using a normal distribution.
The resulting tensor will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where
\[\text{std} = \text{gain} \times \sqrt{\frac{2}{fan\_in + fan\_out}}\]
- Parameters:
gain – float. An optional scaling factor.
seed – int. Used to seed the random generator.
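Examples
A sketch mirroring the HeNormal example above; the tlx.initializers.XavierNormal spelling is assumed from the class name:

>>> import tensorlayerx as tlx
>>> init = tlx.initializers.XavierNormal(gain=1.0)
>>> print(init(shape=(5, 10), dtype=tlx.float32))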
XavierUniform¶
class tensorlayerx.nn.initializers.XavierUniform(gain=1.0, seed=None)¶
This class implements the Xavier weight initializer from the paper by Xavier Glorot and Yoshua Bengio, using a uniform distribution.
The resulting tensor will have values sampled from \(\mathcal{U}(-\text{bound}, \text{bound})\) where
\[\text{bound} = \text{gain} \times \sqrt{\frac{6}{fan\_in + fan\_out}}\]
- Parameters:
gain – float. An optional scaling factor.
seed – int. Used to seed the random generator.
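Examples
A sketch mirroring the examples above; the tlx.initializers.XavierUniform spelling is assumed from the class name:

>>> import tensorlayerx as tlx
>>> init = tlx.initializers.XavierUniform(gain=1.0)
>>> print(init(shape=(5, 10), dtype=tlx.float32))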