jittor.init

This page documents Jittor's parameter initialization module; you can import it with from jittor import init.
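A typical pattern is to build a module and then re-initialize its parameters in place with the underscore-suffixed functions below; a minimal sketch (the choice of relu gain here is illustrative, not prescribed by the module):

from jittor import init, nn

linear = nn.Linear(4, 4)
# in-place initializers (trailing underscore) overwrite the parameter directly
init.xavier_uniform_(linear.weight, gain=init.calculate_gain('relu'))
init.constant_(linear.bias, 0.0)
print(linear.weight, linear.bias)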

jittor.init.calculate_gain(nonlinearity, param=None)[source]

Return the recommended gain value for the given nonlinearity function. The values are as follows:

nonlinearity        gain
Linear / Identity   \(1\)
Conv{1,2,3}D        \(1\)
Sigmoid             \(1\)
Tanh                \(\frac{5}{3}\)
ReLU                \(\sqrt{2}\)
Leaky ReLU          \(\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}\)
SELU                \(\frac{3}{4}\)

Args:

nonlinearity: the name of the non-linear function
param: optional parameter for the non-linear function (e.g. the negative slope of leaky_relu)

Examples:

from jittor import init
gain = init.calculate_gain('leaky_relu', 0.2)  # leaky_relu with negative_slope=0.2
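The table entries can be checked directly; a quick sketch (the values in the comments are approximate):

from jittor import init
print(init.calculate_gain('relu'))             # 1.4142... == sqrt(2)
print(init.calculate_gain('tanh'))             # 1.6666... == 5/3
print(init.calculate_gain('leaky_relu', 0.2))  # sqrt(2 / (1 + 0.2**2)) ≈ 1.3867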
jittor.init.calculate_std(var, mode, nonlinearity, param=0.01)[source]

Compute the standard deviation used by the fan-based initializers (e.g. the kaiming functions below) for var, given the fan mode and nonlinearity.
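This entry has no example above; a minimal hedged sketch (which dimension maps to which fan is jittor's internal convention and is an assumption here):

import jittor as jt
from jittor import init

w = jt.zeros((64, 128))
# std a Kaiming-style initializer would use for this weight with ReLU:
# calculate_gain('relu') divided by the square root of the selected fan
std = init.calculate_std(w, 'fan_in', 'relu')
print(std)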
jittor.init.constant(shape, dtype='float32', value=0.0)[source]

Generate constant Jittor Var.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

value (int or float):

value to be filled in output Var

Return:

A Jittor Var filled with the constant value.

Example:

from jittor import init
print(init.constant(2))
# output: [0.,0.]
print(init.constant((2,3), value=1.))
# output: [[1.,1.,1.],[1.,1.,1.]]
jittor.init.constant_(var, value=0.0)[source]

Inplace initialize variable with constant value.

Args:
var (Jittor Var):

Var to initialize with constant value.

value (int or float):

value to be filled in var, default 0.0

Return:

var itself.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.constant_(linear.weight)
print(linear.weight)
# output: [[0.,0.],[0.,0.]]
linear.weight.constant_() # This is ok too
jittor.init.eye(shape, dtype='float32')[source]

Generate 2-D identity matrix.

Args:
shape (int or tuple of int):

shape of the output matrix

dtype (string):

dtype of the output matrix, default float32

Return:

A Jittor Var of identity matrix.

Example:

from jittor import init
print(init.eye(2))
# output: [[1.,0.],[0.,1.]]
print(init.eye((2,3), "float32"))
# output: [[1.,0.,0.],[0.,1.,0.]]
jittor.init.eye_(var)[source]

Inplace initialize variable with identity matrix.

Args:
var (Jittor Var):

Var to initialize with identity matrix.

Return:

var itself.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.eye_(linear.weight)
print(linear.weight)
# output: [[1.,0.],[0.,1.]]
linear.weight.eye_() # This is ok too
jittor.init.fill(var, value=0.0)

Inplace initialize variable with constant value; behaves the same as constant_, which is why the example below uses constant_.

Args:
var (Jittor Var):

Var to initialize with constant value.

Return:

var itself.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.constant_(linear.weight)
print(linear.weight)
# output: [[0.,0.],[0.,0.]]
linear.weight.constant_() # This is ok too
jittor.init.gauss(shape, dtype='float32', mean=0.0, std=1.0)[source]

Return a Jittor Var initialized by random gauss.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

mean (int or float or Var):

mean value of the random gauss

std (int or float or Var):

std value of the random gauss. Both mean and std may themselves be Vars; see the sketch after the example below.

Example:

from jittor import init
from jittor import nn
a = init.gauss((2,2), "float32", 0.0, 1.0)
print(a)
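Since mean and std may be Vars, per-element distribution parameters are possible; a small sketch (elementwise broadcasting is assumed from the Args above):

import jittor as jt
from jittor import init

means = jt.array([[0.0, 10.0], [20.0, 30.0]])
a = init.gauss((2, 2), "float32", means, 0.1)
print(a)  # each entry lands near the corresponding mean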
jittor.init.gauss_(var, mean=0.0, std=1.0)[source]

Inplace initialize Jittor Var by random gauss.

Args:
var (Jittor Var):

Var to be initialized by random gauss

mean (int or float or Var):

mean value of the random gauss

std (int or float or Var):

std value of the random gauss

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.gauss_(linear.weight, 0.0, 1.0)
print(linear.weight)
linear.weight.gauss_(0.0, 1.0) # This is ok too
jittor.init.invariant_uniform(shape, dtype='float32', mode='fan_in')[source]

Return a Jittor Var initialized by invariant_uniform.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass. A sketch contrasting the two modes follows the example below.

Example:

from jittor import init
from jittor import nn
a = init.invariant_uniform((2,2))
print(a)
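A minimal sketch of the fan_in/fan_out contrast on a non-square weight (which dimension maps to which fan is jittor's internal convention; the point is only that the two modes select different fans, so the spreads differ):

import jittor as jt
from jittor import init

w_in = jt.zeros((8, 512))
init.invariant_uniform_(w_in, mode='fan_in')
w_out = jt.zeros((8, 512))
init.invariant_uniform_(w_out, mode='fan_out')
print(w_in.numpy().std(), w_out.numpy().std())  # different fans, different spreads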
jittor.init.invariant_uniform_(var, mode='fan_in')[source]

Inplace initialize Jittor Var by invariant_uniform.

Args:
var (Jittor Var):

Var to be initialized by random invariant_uniform

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.invariant_uniform_(linear.weight)
print(linear.weight)
linear.weight.invariant_uniform_() # This is ok too
jittor.init.kaiming_normal_(var, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Inplace initialize Jittor Var by kaiming_normal.

Args:
var (Jittor Var):

Var to be initialized by random kaiming_normal

a (float):

the negative slope of the rectifier used after this layer (only used with ‘leaky_relu’); see the sketch after the example below

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass.

nonlinearity (string):

nonlinearity used after this layer. It can be one of [linear, conv*, sigmoid, tanh, relu, leaky_relu]. leaky_relu is used by default.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.kaiming_normal_(linear.weight)
print(linear.weight)
linear.weight.kaiming_normal_() # This is ok too
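A sketch passing the negative slope explicitly; the expected scale in the comment follows from the gain table of calculate_gain and is approximate:

import jittor as jt
from jittor import init

w = jt.zeros((64, 64))  # square, so fan_in == fan_out == 64
init.kaiming_normal_(w, a=0.2, mode='fan_in', nonlinearity='leaky_relu')
print(w.numpy().std())  # ≈ sqrt(2 / (1 + 0.2**2)) / sqrt(64) ≈ 0.173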
jittor.init.kaiming_uniform_(var, a=0, mode='fan_in', nonlinearity='leaky_relu')[source]

Inplace initialize Jittor Var by kaiming_uniform.

Args:
var (Jittor Var):

Var to be initialized by random kaiming_uniform

a (float):

the negative slope of the rectifier used after this layer (only used with ‘leaky_relu’)

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass.

nonlinearity (string):

nonlinearity used after this layer. It can be one of [linear, conv*, sigmoid, tanh, relu, leaky_relu]. leaky_relu is used by default.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.kaiming_uniform_(linear.weight)
print(linear.weight)
linear.weight.kaiming_uniform_() # This is ok too
jittor.init.one(shape, dtype='float32')[source]

Generate Jittor Var filled by one.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

Return:

A Jittor Var filled with ones.

Example:

from jittor import init
print(init.one(2))
# output: [1.,1.]
print(init.one((2,3)))
# output: [[1.,1.,1.],[1.,1.,1.]]
jittor.init.one_(var)[source]

Inplace initialize variable with one.

Args:
var (Jittor Var):

Var to initialize with one.

Return:

var itself.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.one_(linear.weight)
print(linear.weight)
# output: [[1.,1.],[1.,1.]]
linear.weight.one_() # This is ok too
jittor.init.relu_invariant_gauss(shape, dtype='float32', mode='fan_in')[source]

Return Jittor Var initialized by relu_invariant_gauss.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass.

Example:

from jittor import init
from jittor import nn
a = init.relu_invariant_gauss((2,2))
print(a)
jittor.init.relu_invariant_gauss_(var, mode='fan_in')[source]

Inplace initialize Jittor Var by relu_invariant_gauss.

Args:
var (Jittor Var):

Var to be initialized by random relu_invariant_gauss

mode (string):

mode selection, should be fan_in or fan_out. Choosing ‘fan_in’ preserves the magnitude of the variance of the weights in the forward pass. Choosing ‘fan_out’ preserves the magnitudes in the backwards pass.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.relu_invariant_gauss_(linear.weight)
print(linear.weight)
linear.weight.relu_invariant_gauss_() # This is ok too
jittor.init.trunc_normal_(var: jittor_core.jittor_core.Var, mean: float = 0.0, std: float = 1.0, a: float = -2.0, b: float = 2.0) -> jittor_core.jittor_core.Var[source]

Fills the input jt.jittor_core.Var with values drawn from a truncated normal distribution. The values are effectively drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\) with values outside \([a, b]\) redrawn until they are within the bounds. The method used for generating the random values works best when \(a \leq \text{mean} \leq b\).

Args:

var: an n-dimensional jt.jittor_core.Var
mean: the mean of the normal distribution
std: the standard deviation of the normal distribution
a: the minimum cutoff value
b: the maximum cutoff value

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.trunc_normal_(linear.weight, std=.02)
print(linear.weight)
linear.weight.trunc_normal_(std=.02) # This is ok too
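A quick hedged check that the truncation holds:

import jittor as jt
from jittor import init

v = jt.zeros((1000,))
init.trunc_normal_(v, mean=0.0, std=1.0, a=-2.0, b=2.0)
x = v.numpy()
print(x.min(), x.max())  # both should lie within [-2, 2]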

jittor.init.uniform(shape, dtype='float32', low=0, high=1)[source]

Generate random uniform Jittor Var.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

low (int or float or Var):

lower bound value of the random uniform

high (int or float or Var):

upper bound value of the random uniform

Return:

A Jittor Var filled with random uniform values.

Example:

from jittor import init
print(init.uniform(5))
# output: [0.202268, 0.518688, 0.595274, 0.777354, 0.981979]
print(init.uniform((2,3), low=-1, high=1))
# output: [[ 0.6647397   0.2801202  -0.01981187]
#          [-0.9779438  -0.30149996  0.69056886]]
jittor.init.uniform_(var, low=0, high=1)[source]

Inplace initialize Jittor Var by random uniform.

Args:
var (Jittor Var):

Var to be initialized by random uniform

low (int or float or Var):

lower bound value of the random uniform

high (int or float or Var):

upper bound value of the random uniform

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.uniform_(linear.weight, -1.0, 1.0)
print(linear.weight)
# output: [[ 0.6647397   0.2801202], [-0.9779438  -0.30149996]]
linear.weight.uniform_(-1.0, 1.0) # This is ok too
jittor.init.xavier_gauss(shape, dtype='float32', gain=1.0)[source]

Return Jittor Var initialized by xavier_gauss, a.k.a xavier_normal.

The resulting var will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where

\[\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}\]

Args:
shape (int or tuple of int):

shape of the return Var.

dtype (string):

dtype of the return Var, default float32.

gain (float):

an optional scaling factor.

Example:

from jittor import init
a = init.xavier_gauss((2,2), gain=init.calculate_gain('relu'))
print(a)
jittor.init.xavier_gauss_(var, gain=1.0)[source]

Inplace initialize Jittor Var by xavier_gauss, a.k.a xavier_normal.

The resulting var will have values sampled from \(\mathcal{N}(0, \text{std}^2)\) where

\[\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}\]

Args:
var (Jittor Var):

Var to be initialized by random xavier_gauss

gain (float):

an optional scaling factor.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.xavier_gauss_(linear.weight, init.calculate_gain('relu'))
print(linear.weight)
linear.weight.xavier_gauss_() # This is ok too
jittor.init.xavier_uniform(shape, dtype='float32', gain=1.0)[source]

Return Jittor Var initialized by xavier_uniform.

The resulting var will have values sampled from \(\text{uniform}(-a, a)\) where

\[a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}\]

The sketch after the example below checks this bound.

Args:
shape (int or tuple of int):

shape of the return Var.

dtype (string):

dtype of the return Var, default float32.

gain (float):

an optional scaling factor.

Example:

from jittor import init
from jittor import nn
a = init.xavier_uniform((2,2), gain=init.calculate_gain('relu'))
print(a)
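A hedged check of the bound (fan_in + fan_out is symmetric in the two dimensions, so no assumption about jittor's fan convention is needed):

import math
from jittor import init

w = init.xavier_uniform((64, 32))
bound = math.sqrt(6 / (64 + 32))  # a with the default gain=1
x = w.numpy()
print(x.min() >= -bound, x.max() <= bound)  # expected: True True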
jittor.init.xavier_uniform_(var, gain=1.0)[source]

Inplace initialize Jittor Var by xavier_uniform.

The resulting var will have values sampled from \(\text{uniform}(-a, a)\) where

\[a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}\]

Args:
var (Jittor Var):

Var to be initialized by random xavier_uniform

gain (float):

an optional scaling factor.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.xavier_uniform_(linear.weight, init.calculate_gain('relu'))
print(linear.weight)
linear.weight.xavier_uniform_() # This is ok too
jittor.init.zero(shape, dtype='float32')[source]

Generate zero Jittor Var.

Args:
shape (int or tuple of int):

shape of the output Var

dtype (string):

dtype of the output Var, default float32

Return:

A Jittor Var filled with zeros.

Example:

from jittor import init
print(init.zero(2))
# output: [0.,0.]
print(init.zero((2,3)))
# output: [[0.,0.,0.],[0.,0.,0.]]
jittor.init.zero_(var)[source]

Inplace initialize variable with zero.

Args:
var (Jittor Var):

Var to initialize with zero.

Return:

var itself.

Example:

from jittor import init
from jittor import nn
linear = nn.Linear(2,2)
init.zero_(linear.weight)
print(linear.weight)
# output: [[0.,0.],[0.,0.]]
linear.weight.zero_() # This is ok too