jittor.nn

This is the API documentation for Jittor's neural network module. You can access it with from jittor import nn.

class jittor.nn.BCELoss(weight=None, size_average=True)[源代码]
execute(output, target)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
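
Example (a minimal usage sketch; jt.sigmoid is assumed here to turn raw values into probabilities in (0, 1)):

>>> loss = nn.BCELoss()
>>> output = jt.sigmoid(jt.randn(3))   # probabilities
>>> target = jt.float32([1, 0, 1])
>>> l = loss(output, target)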

class jittor.nn.BCEWithLogitsLoss(weight=None, pos_weight=None, size_average=True)[源代码]
execute(output, target)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.BatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, is_train=True, sync=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
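
Example (a minimal usage sketch on a 4-D NCHW input):

>>> bn = nn.BatchNorm(8)
>>> x = jt.randn(4, 8, 16, 16)
>>> y = bn(x)   # same shape as x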

jittor.nn.BatchNorm1d

alias of jittor.nn.BatchNorm

jittor.nn.BatchNorm2d

alias of jittor.nn.BatchNorm

jittor.nn.BatchNorm3d

alias of jittor.nn.BatchNorm

class jittor.nn.Bilinear(in1_features, in2_features, out_features, bias=True, dtype='float32')[源代码]

Applies a bilinear transformation \(out = in_1^T W\, in_2 + bias\).

Example:

>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = jt.randn(128, 20)
>>> input2 = jt.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.shape) # [128, 40]

execute(in1, in2)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.ComplexNumber(real: jittor_core.jittor_core.Var, imag: Optional[jittor_core.jittor_core.Var] = None, is_concat_value=False)[源代码]

A complex number class.

The value is stored internally as jt.stack([real, imag], dim=-1).

You can construct a ComplexNumber from a real part and an imaginary part with ComplexNumber(real, imag), from a real part only with ComplexNumber(real), or from an already stacked value with ComplexNumber(value, is_concat_value=True).

add, sub, mul and truediv are implemented between a ComplexNumber and another ComplexNumber, a jt.Var, an int, or a float.

You can use shape, reshape, etc. just as with jt.Var.

Example:
>>> real = jt.array([[[1., -2., 3.]]])
>>> imag = jt.array([[[0., 1., 6.]]])
>>> a = ComplexNumber(real, imag)
>>> a + a
>>> a / a
>>> a.norm()                # sqrt(real^2+imag^2)
>>> a.exp()                 # e^real * (cos(imag) + i*sin(imag))
>>> a.conj()                # ComplexNumber(real, -imag)
>>> a.fft2()                # cuda only now. len(real.shape) equals 3
>>> a.ifft2()               # cuda only now. len(real.shape) equals 3
>>> a = jt.array([[1,1],[1,-1]])
>>> b = jt.array([[0,-1],[1,0]])
>>> c = ComplexNumber(a,b) / jt.sqrt(3)
>>> c @ c.transpose().conj()
ComplexNumber(real=jt.Var([[0.99999994 0.        ]
        [0.         0.99999994]], dtype=float32), imag=jt.Var([[0. 0.]
        [0. 0.]], dtype=float32))
conj()[源代码]
detach()[源代码]
exp()[源代码]
fft2()[源代码]
ifft2()[源代码]
property imag
norm()[源代码]
permute(*axes)[源代码]
property real
reshape(shape)[源代码]
property shape
squeeze(dim=0)[源代码]
start_grad()[源代码]
stop_grad()[源代码]
transpose(*axes)[源代码]
unsqueeze(dim=0)[源代码]
class jittor.nn.ConstantPad2d(padding, value)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Conv(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]

Applies a 2D convolution over an input signal composed of several input planes.

参数
  • in_channels (int) – Number of channels in the input feature map

  • out_channels (int) – Number of channels in the output feature map

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Example:

>>> conv = nn.Conv2d(24, 32, 3)
>>> conv = nn.Conv2d(24, 32, (3,3))
>>> conv = nn.Conv2d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv2d(24, 32, 3, dilation=(3, 1))
>>> input = jt.randn(4, 24, 100, 100)
>>> output = conv(input)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]

Applies a 1D convolution over an input signal composed of several input planes.

参数
  • in_channels (int) – Number of channels in the input feature map

  • out_channels (int) – Number of channels in the output feature map

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Example:

>>> conv = nn.Conv1d(24, 32, 3)
>>> conv = nn.Conv1d(24, 32, 5)
>>> conv = nn.Conv1d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv1d(24, 32, 3, dilation=3)
>>> input = jt.randn(4, 24, 100)
>>> output = conv(input)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Conv1d_sp(inchannels, outchannels, kernel_size=1, bias=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.Conv2d

alias of jittor.nn.Conv

class jittor.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]

Applies a 3D convolution over an input signal composed of several input planes.

参数
  • in_channels (int) – Number of channels in the input feature map

  • out_channels (int) – Number of channels in the output feature map

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

  • bias (bool, optional) – If True, adds a learnable bias to the output. Default: True

Example:

>>> conv = nn.Conv3d(24, 32, 3)
>>> conv = nn.Conv3d(24, 32, (3,3,3))
>>> conv = nn.Conv3d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv3d(24, 32, 3, dilation=(3, 1, 1))
>>> input = jt.randn(4, 24, 50, 50, 50)
>>> output = conv(input)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.ConvTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
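
Example (a minimal usage sketch; the output spatial size noted in the comment is the expected value):

>>> deconv = nn.ConvTranspose(8, 4, 3, stride=2)
>>> x = jt.randn(2, 8, 10, 10)
>>> y = deconv(x)   # expected shape [2, 4, 21, 21]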

jittor.nn.ConvTranspose2d

alias of jittor.nn.ConvTranspose

class jittor.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.CrossEntropyLoss(weight=None, ignore_index=None)[源代码]
execute(output, target)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
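
Example (a minimal usage sketch; output holds raw logits and target holds class indices):

>>> loss = nn.CrossEntropyLoss()
>>> output = jt.randn(4, 10)          # logits for 4 samples, 10 classes
>>> target = jt.int32([1, 0, 3, 9])   # class indices
>>> l = loss(output, target)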

class jittor.nn.DropPath(p=0.5, is_train=False)[源代码]

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Dropout(p=0.5, is_train=False)[源代码]
execute(input)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
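
Example (a minimal usage sketch; dropout is only applied when is_train=True):

>>> drop = nn.Dropout(p=0.5, is_train=True)
>>> x = jt.randn(2, 4)
>>> y = drop(x)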

class jittor.nn.Dropout2d(p=0.5, is_train=False)[源代码]
execute(input)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.ELU(alpha=1.0)[源代码]

Applies the element-wise function:

\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]
参数
  • x (jt.Var) – the input var

  • alpha (float, optional) – the \(\alpha\) value for the ELU formulation. Default: 1.0

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.elu(a)
jt.Var([-0.31873488 -0.6782155   2.128115  ], dtype=float32)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, dtype='float32')[源代码]

A simple lookup table that stores embeddings of a fixed dictionary and size.

参数
  • num_embeddings (int) – size of the dictionary of embeddings

  • embedding_dim (int) – the size of each embedding vector

Example:
>>> embedding = nn.Embedding(10, 3)
>>> x = jt.int32([1, 2, 3, 3])
>>> embedding(x)
jt.Var([[ 1.1128596   0.19169547  0.706642]
 [ 1.2047412   1.9668795   0.9932192]
 [ 0.14941819  0.57047683 -1.3217674]
 [ 0.14941819  0.57047683 -1.3217674]], dtype=float32)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Flatten(start_dim=1, end_dim=-1)[源代码]

Flattens the contiguous range of dimensions in a Var.

参数
  • start_dim (int) – the first dimension to be flattened. Defaults: 1.

  • end_dim (int) – the last dimension to be flattened. Defaults: -1.

execute(x) jittor_core.jittor_core.Var[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
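
Example (a minimal usage sketch; the shape in the comment is the expected result):

>>> x = jt.randn(2, 3, 4, 5)
>>> y = nn.Flatten()(x)
>>> y.shape   # expected [2, 60]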

class jittor.nn.GRU(input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False)[源代码]

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.

参数
  • input_size (int) – The number of expected features in the input.

  • hidden_size (int) – The number of features in the hidden state.

  • num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False

  • dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool, optional) – If True, becomes a bidirectional GRU. Default: False

Example:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
call_rnn_cell(input, hidden, suffix)[源代码]
class jittor.nn.GRUCell(input_size, hidden_size, bias=True)[源代码]

A gated recurrent unit (GRU) cell.

参数
  • input_size (int) – The number of expected features in the input

  • hidden_size (int) – The number of features in the hidden state

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

Example:

>>> rnn = nn.GRUCell(10, 20)
>>> input = jt.randn((6, 3, 10))
>>> hx = jt.randn((3, 20))
>>> output = []
>>> for i in range(6):
        hx = rnn(input[i], hx)
        output.append(hx)
execute(input, hx=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, is_train=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
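
Example (a minimal usage sketch; 16 channels split into 4 groups):

>>> gn = nn.GroupNorm(4, 16)
>>> x = jt.randn(2, 16, 10, 10)
>>> y = gn(x)   # same shape as x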

class jittor.nn.Identity(*args, **kwargs)[源代码]
execute(input)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.InstanceNorm(num_features, eps=1e-05, momentum=0.1, affine=True, is_train=True, sync=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.InstanceNorm1d

alias of jittor.nn.InstanceNorm

jittor.nn.InstanceNorm2d

alias of jittor.nn.InstanceNorm

jittor.nn.InstanceNorm3d

alias of jittor.nn.InstanceNorm

class jittor.nn.KLDivLoss(reduction: str = 'mean', log_target: bool = False)[源代码]

Computes the Kullback-Leibler divergence loss.

execute(input: jittor_core.jittor_core.Var, target: jittor_core.jittor_core.Var) jittor_core.jittor_core.Var[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.L1Loss[源代码]
execute(output, target)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0)[源代码]

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.

参数
  • input_size (int) – The number of expected features in the input.

  • hidden_size (int) – The number of features in the hidden state.

  • num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False

  • dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool, optional) – If True, becomes a bidirectional LSTM. Default: False

  • proj_size (int, optional) – If > 0, will use LSTM with projections of corresponding size. Default: 0

Example:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> c0 = jt.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
call_rnn_cell(input, hidden, suffix)[源代码]
class jittor.nn.LSTMCell(input_size, hidden_size, bias=True)[源代码]

A long short-term memory (LSTM) cell.

参数
  • input_size (int) – The number of expected features in the input

  • hidden_size (int) – The number of features in the hidden state

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

Example:

>>> rnn = nn.LSTMCell(10, 20) # (input_size, hidden_size)
>>> input = jt.randn(2, 3, 10) # (time_steps, batch, input_size)
>>> hx = jt.randn(3, 20) # (batch, hidden_size)
>>> cx = jt.randn(3, 20)
>>> output = []
>>> for i in range(input.shape[0]):
        hx, cx = rnn(input[i], (hx, cx))
        output.append(hx)
>>> output = jt.stack(output, dim=0)
execute(input, hx=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.LayerNorm(normalized_shape, eps: float = 1e-05, elementwise_affine: bool = True)[源代码]
execute(**kw)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
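
Example (a minimal usage sketch; normalization is applied over the last dimension):

>>> ln = nn.LayerNorm(10)
>>> x = jt.randn(2, 5, 10)
>>> y = ln(x)   # same shape as x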

jittor.nn.LayerNorm1d

alias of jittor.nn.LayerNorm

jittor.nn.LayerNorm2d

alias of jittor.nn.LayerNorm

jittor.nn.LayerNorm3d

alias of jittor.nn.LayerNorm

class jittor.nn.Linear(in_features, out_features, bias=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
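
Example (a minimal usage sketch; the shape in the comment is the expected result):

>>> fc = nn.Linear(10, 5)
>>> x = jt.randn(3, 10)
>>> y = fc(x)   # expected shape [3, 5]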

class jittor.nn.MSELoss(reduction='mean')[源代码]
execute(output, target)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
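
Example (a minimal usage sketch):

>>> loss = nn.MSELoss()
>>> output = jt.randn(4, 3)
>>> target = jt.randn(4, 3)
>>> l = loss(output, target)   # scalar with reduction='mean'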

class jittor.nn.Mish(inplace=False)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.ModuleList

alias of jittor.nn.Sequential

class jittor.nn.PReLU(num_parameters=1, init_=0.25)[源代码]

Applies the element-wise function:

\[\begin{split}\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases}\end{split}\]
参数
  • x (jt.Var) – the input var

  • num_parameters (int, optional) – number of \(a\) to learn, can be either 1 or the number of channels at input. Default: 1

  • init (float, optional) – the initial value of \(a\). Default: 0.25

Example:
>>> a = jt.randn(3)
>>> prelu = nn.PReLU()
>>> prelu(a)
jt.Var([-0.09595093  1.1338731   6.128115  ], dtype=float32)
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.Parameter(data, requires_grad=True)[源代码]

The Parameter interface isn't needed in Jittor. This interface does nothing; it exists only for compatibility.

A Jittor Var is a Parameter when it is a member of a Module. If you don't want a Jittor Var member to be treated as a Parameter, just give it a name starting with an underscore (_).

jittor.nn.ParameterDict

alias of jittor.nn.ParameterList

class jittor.nn.ParameterList(*args)[源代码]
add_param(name, var)[源代码]
append(var)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

items()[源代码]
keys()[源代码]
values()[源代码]
class jittor.nn.PixelShuffle(upscale_factor)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
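
Example (a minimal usage sketch; with upscale_factor r, channels shrink by r*r and spatial size grows by r):

>>> ps = nn.PixelShuffle(2)
>>> x = jt.randn(1, 8, 4, 4)
>>> y = ps(x)   # expected shape [1, 2, 8, 8]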

class jittor.nn.RNN(input_size: int, hidden_size: int, num_layers: int = 1, nonlinearity: str = 'tanh', bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False)[源代码]

Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.

参数
  • input_size (int) – The number of expected features in the input.

  • hidden_size (int) – The number of features in the hidden state.

  • num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1

  • nonlinearity (str, optional) – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

  • batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False

  • dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0

  • bidirectional (bool, optional) – If True, becomes a bidirectional RNN. Default: False

Example:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
call_rnn_cell(input, hidden, suffix)[源代码]
class jittor.nn.RNNBase(mode: str, input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False, proj_size: int = 0, nonlinearity: Optional[str] = None)[源代码]
abstract call_rnn_cell(input, hidden, suffix)[源代码]
call_rnn_sequence(input, hidden, suffix)[源代码]
execute(input, hx=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')[源代码]

An Elman RNN cell with tanh or ReLU non-linearity.

参数
  • input_size (int) – The number of expected features in the input

  • hidden_size (int) – The number of features in the hidden state

  • bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.

  • nonlinearity (str, optional) – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’.

Example:

>>> rnn = nn.RNNCell(10, 20)
>>> input = jt.randn((6, 3, 10))
>>> hx = jt.randn((3, 20))
>>> output = []
>>> for i in range(6):
        hx = rnn(input[i], hx)
        output.append(hx)
execute(input, hx=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.ReflectionPad2d(padding)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.ReplicationPad2d(padding)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Resize(size, mode='nearest', align_corners=False)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Sequential(*args)[源代码]
add_module(name, mod)[源代码]
append(mod)[源代码]
dfs(parents, k, callback, callback_leave, recurse=True)[源代码]

A utility function to traverse the module.

execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

items()[源代码]
keys()[源代码]
named_children()[源代码]
values()[源代码]
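
Example (a minimal usage sketch chaining a few modules):

>>> model = nn.Sequential(
...     nn.Conv2d(3, 8, 3, padding=1),
...     nn.ReLU(),
...     nn.Conv2d(8, 8, 3, padding=1),
... )
>>> x = jt.randn(2, 3, 32, 32)
>>> y = model(x)
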
class jittor.nn.Sigmoid[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Softplus(beta=1, threshold=20)[源代码]

SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.

Args:

[in] beta (float): the beta value for the Softplus formulation. Default: 1.

[in] threshold (float): values above this revert to a linear function. Default: 20.

execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
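
Example (a minimal usage sketch; the values in the comment are approximate):

>>> sp = nn.Softplus()
>>> x = jt.float32([-1., 0., 2.])
>>> sp(x)   # approximately [0.3133, 0.6931, 2.1269]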

class jittor.nn.Tanh[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Upsample(scale_factor=None, mode='nearest')[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
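
Example (a minimal usage sketch, assuming a numeric scale_factor is accepted):

>>> up = nn.Upsample(scale_factor=2, mode='nearest')
>>> x = jt.randn(1, 3, 4, 4)
>>> y = up(x)   # expected shape [1, 3, 8, 8]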

class jittor.nn.UpsamplingBilinear2d(scale_factor=None)[源代码]
class jittor.nn.UpsamplingNearest2d(scale_factor=None)[源代码]
class jittor.nn.ZeroPad2d(padding)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.affine_grid(theta, size, align_corners=False)[源代码]
jittor.nn.affine_grid_generator_4D(theta, N, C, H, W, align_corners)[源代码]
jittor.nn.affine_grid_generator_5D(theta, N, C, D, H, W, align_corners)[源代码]
jittor.nn.backward(v, *args, **kw)[源代码]

The backward variable interface doesn't exist in Jittor. Please use optimizer.backward(loss) or optimizer.step(loss) instead. For example, if your code looks like this:

optimizer.zero_grad()
loss.backward()
optimizer.step()

It can be changed to this:

optimizer.zero_grad()
optimizer.backward(loss)
optimizer.step()

Or more concise:

optimizer.step(loss)

The step function will automatically zero grad and backward.

jittor.nn.baddbmm(input, batch1, batch2, beta=1, alpha=1)[源代码]
jittor.nn.batch_norm(x, running_mean, running_var, weight=1, bias=0, training=False, momentum=0.1, eps=1e-05)[源代码]
jittor.nn.bce_loss(output, target, weight=None, size_average=True)[源代码]
jittor.nn.bilinear(in1, in2, weight, bias)[源代码]
jittor.nn.binary_cross_entropy_with_logits(output, target, weight=None, pos_weight=None, size_average=True)[源代码]
jittor.nn.bmm(a, b)[源代码]

Batch matrix multiply: the shape of input a is [batch, n, m], the shape of input b is [batch, m, k], and the returned shape is [batch, n, k].

Example:

import jittor as jt
from jittor import nn

batch, n, m, k = 100, 5, 6, 7

a = jt.random((batch, n, m))
b = jt.random((batch, m, k))
c = nn.bmm(a, b)
jittor.nn.bmm_transpose(a, b)[源代码]

returns a * b^T

jittor.nn.clip_coordinates(x, clip_limit)[源代码]
jittor.nn.conv(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 2D convolution over an input signal composed of several input planes.

参数
  • x (jt.Var) – the input image

  • weight (jt.Var) – the convolution kernel

  • bias (jt.Var, optional) – the bias after convolution

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

Example:

>>> x = jt.randn(4, 24, 100, 100)
>>> w = jt.randn(32, 24, 3, 3)
>>> y = nn.conv2d(x, w)
jittor.nn.conv2d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[源代码]

Applies a 2D convolution over an input signal composed of several input planes.

参数
  • x (jt.Var) – the input image

  • weight (jt.Var) – the convolution kernel

  • bias (jt.Var, optional) – the bias after convolution

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

Example:

>>> x = jt.randn(4, 24, 100, 100)
>>> w = jt.randn(32, 24, 3, 3)
>>> y = nn.conv2d(x, w)
jittor.nn.conv3d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[源代码]

Applies a 3D convolution over an input signal composed of several input planes.

参数
  • x (jt.Var) – the input volume

  • weight (jt.Var) – the convolution kernel

  • bias (jt.Var, optional) – the bias after convolution

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1

Example:

>>> x = jt.randn(4, 24, 50, 50, 50)
>>> w = jt.randn(32, 24, 3, 3, 3)
>>> y = nn.conv3d(x, w)
jittor.nn.conv_transpose(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[源代码]
jittor.nn.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)
jittor.nn.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[源代码]
jittor.nn.cross_entropy_loss(output, target, weight=None, ignore_index=None, reduction='mean')[源代码]
jittor.nn.dropout(x, p=0.5, is_train=False)[源代码]
jittor.nn.dropout2d(x, p=0.5, is_train=False)[源代码]
jittor.nn.droppath(x, p=0.5, is_train=False)[源代码]
jittor.nn.elu(x: jittor_core.jittor_core.Var, alpha: float = 1.0) jittor_core.jittor_core.Var[源代码]

Applies the element-wise function:

\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]
参数
  • x (jt.Var) – the input var

  • alpha (float, optional) – the \(\alpha\) value for the ELU formulation. Default: 1.0

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.elu(a)
jt.Var([-0.31873488 -0.6782155   2.128115  ], dtype=float32)
jittor.nn.embedding(input, weight)[源代码]
jittor.nn.fold(X, output_size, kernel_size, dilation=1, padding=0, stride=1)[源代码]
jittor.nn.fp32_guard(func)[源代码]
jittor.nn.gelu(x)[源代码]

Applies the element-wise function:

\[\text{GELU}(x) = x * \Phi(x)\]

where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.

参数

x (jt.Var) – the input var

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.gelu(a)
jt.Var([-0.134547   0.9882567  6.128115 ], dtype=float32)
jittor.nn.get_init_var_rand(shape, dtype)[源代码]
jittor.nn.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=False)[源代码]
jittor.nn.grid_sample_v0(input, grid, mode='bilinear', padding_mode='zeros')[源代码]

Given an input and a flow-field grid, computes the output using input values and pixel locations from grid.

grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range [-1, 1]. For example, the values x = -1, y = -1 correspond to the left-top pixel of the input, and the values x = 1, y = 1 correspond to the right-bottom pixel of the input.

Args:

[in] input (var): the source input var, whose shape is (N, C, Hi, Wi)

[in] grid (var): the pixel locations, whose shape is (N, Ho, Wo, 2)

[in] mode (string): the interpolate way, default: bilinear.

[in] padding_mode (string): the padding way, default: zeros.

[out] output (var): the output var, whose shape is (N, C, Ho, Wo)

Example:

>>> x = jt.array([[[[1,2],[3,4]]]])
>>> print(x)
[[[[1 2]
[3 4]]]] 
>>> grid = jt.array([[[[0.5, 0.5]]]])
>>> print(x.shape, grid.shape)
[1,1,2,2,], [1,1,1,2,]
>>> nn.grid_sample(x, grid)
[[[[3.25]]]]
jittor.nn.grid_sampler(X, grid, mode, padding_mode, align_corners)[源代码]
jittor.nn.grid_sampler_2d(X, grid, mode, padding_mode, align_corners)[源代码]
jittor.nn.grid_sampler_3d(X, grid, mode, padding_mode, align_corners)[源代码]
jittor.nn.grid_sampler_compute_source_index(coord, size, padding_mode, align_corners)[源代码]
jittor.nn.grid_sampler_unnormalize(coord, size, align_corners)[源代码]
jittor.nn.group_norm(x, num_groups, weight=1, bias=0, eps=1e-05)[源代码]
jittor.nn.hardtanh(x, min_val=-1, max_val=1)[源代码]
jittor.nn.identity(input)[源代码]
jittor.nn.instance_norm(x, running_mean=None, running_var=None, weight=1, bias=0, momentum=0.1, eps=1e-05)[源代码]
jittor.nn.interpolate(X, size=None, scale_factor=None, mode='bilinear', align_corners=False, tf_mode=False)[源代码]
jittor.nn.l1_loss(output, target)[源代码]
jittor.nn.layer_norm(*args, **kw)[源代码]
jittor.nn.leaky_relu(x, scale=0.01)[源代码]

Applies the element-wise function:

\[\begin{split}\text{LeakyRELU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{scale} \times x, & \text{ otherwise } \end{cases}\end{split}\]
参数
  • x (jt.Var) – the input var

  • scale (float, optional) – the scale value for the leaky relu formulation. Default: 0.01

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 1.1338731   6.128115  ], dtype=float32)
>>> nn.leaky_relu(a)
jt.Var([-3.8380371e-03  1.1338731e+00  6.1281152e+00], dtype=float32)
jittor.nn.linear(x, weight, bias=None)[源代码]

Returns x * weight^T

jittor.nn.linspace_from_neg_one(grid, num_steps, align_corners)[源代码]
jittor.nn.log_sigmoid(x)[源代码]
jittor.nn.log_softmax(x, dim=None)[源代码]
jittor.nn.logsumexp(x, dim, keepdims=False, keepdim=False)[源代码]
jittor.nn.make_base_grid_4D(theta, N, C, H, W, align_corners)[源代码]
jittor.nn.make_base_grid_5D(theta, N, C, D, H, W, align_corners)[源代码]
jittor.nn.matmul(a, b)[源代码]

matrix multiply,

Example:

a = jt.random([3])
b = jt.random([3])
c = jt.matmul(a, b)
assert c.shape == [1]

a = jt.random([3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [3]

a = jt.random([10, 3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [10, 3]

a = jt.random([10, 3, 4])
b = jt.random([4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([10, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([8, 1, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [8, 10, 3, 5]
jittor.nn.matmul_transpose(a, b)[源代码]

returns a * b^T

jittor.nn.mish(x, inplace=False)[源代码]
jittor.nn.mse_loss(output, target, reduction='mean')[源代码]
jittor.nn.nll_loss(output, target, weight=None, ignore_index=-100, reduction='mean')[源代码]
jittor.nn.one_hot(x: jittor_core.jittor_core.Var, num_classes: int = -1) jittor_core.jittor_core.Var[源代码]

Returns the one_hot encoding of inputs.

参数
  • x (jt.Var with bool or integer dtype) – class values of any shape

  • num_classes (int, optional) – Total number of classes. If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor.

返回

a jt.Var with one more dimension than the input, containing 1 at the index along the last dimension indicated by the input and 0 everywhere else.

注解

If the values in x are greater than or equal to num_classes or less than 0, the corresponding rows of the returned one_hot will be all zeros.

Example:
>>> jt.nn.one_hot(jt.arange(5) % 3)
    jt.Var([[1 0 0]
        [0 1 0]
        [0 0 1]
        [1 0 0]
        [0 1 0]], dtype=int32)
>>> jt.nn.one_hot(jt.arange(5) % 3, num_classes=5)
    jt.Var([[1 0 0 0 0]
        [0 1 0 0 0]
        [0 0 1 0 0]
        [1 0 0 0 0]
        [0 1 0 0 0]], dtype=int32)
>>> jt.nn.one_hot(jt.arange(6).reshape(3,2) % 3)
    jt.Var([[[1 0 0]
        [0 1 0]]

        [[0 0 1]
        [1 0 0]]

        [[0 1 0]
        [0 0 1]]], dtype=int32)

jittor.nn.pad(x, padding, mode='constant', value=0)[源代码]
jittor.nn.reflect_coordinates(x, twice_low, twice_high)[源代码]
jittor.nn.relu(x)[源代码]

Applies the element-wise function:

\[\text{ReLU}(x) = \max(0,x)\]
参数

x (jt.Var) – the input var

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 1.1338731   6.128115  ], dtype=float32)
>>> nn.relu(a)
jt.Var([0.        1.1338731 6.128115 ], dtype=float32)
jittor.nn.relu6(x)[源代码]

Applies the element-wise function:

\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]
参数

x (jt.Var) – the input var

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 1.1338731   6.128115  ], dtype=float32)
>>> nn.relu6(a)
jt.Var([0.        1.1338731 6.       ], dtype=float32)
jittor.nn.resize(img, size, mode='nearest', align_corners=False, tf_mode=False)[源代码]
jittor.nn.sign(x: jittor_core.jittor_core.Var) jittor_core.jittor_core.Var[源代码]

returns the signs of elements of x

参数

x (jt.Var) – the input Var

Example:
>>> a = jt.float32([0.99, 0, -0.99])
>>> nn.sign(a)
jt.Var([ 1.  0. -1.], dtype=float32)
jittor.nn.silu(x)[源代码]

Applies the element-wise function:

\[\text{SILU}(x) = x * Sigmoid(x)\]
参数

x (jt.Var) – the input var

Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.silu(a)
jt.Var([-0.15552104 -0.27603802  1.9016962 ], dtype=float32)
jittor.nn.skip_init(module_cls, *args, **kw)[源代码]
jittor.nn.smooth_l1_loss(y_true, y_pred, reduction='mean')[源代码]

Implements the Smooth-L1 loss. y_true and y_pred typically have shape [N, 4], but may have any shape.

Args:

[in] y_true: the ground truth

[in] y_pred: the predictions

[in] reduction: the reduction mode of the loss, which must be one of 'mean', 'sum', 'none'

jittor.nn.softmax(x, dim=None, log=False)[源代码]
jittor.nn.softplus(x, beta=1.0, threshold=20.0)[源代码]
jittor.nn.unfold(X, kernel_size, dilation=1, padding=0, stride=1)[源代码]
jittor.nn.upsample(img, size, mode='nearest', align_corners=False, tf_mode=False)
class jittor.nn.AdaptiveAvgPool2d(output_size)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.AdaptiveAvgPool3d(output_size)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.AdaptiveMaxPool2d(output_size, return_indices=False)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.
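
Example (a minimal usage sketch; the shape in the comment is the expected result):

>>> pool = nn.MaxPool2d(2, stride=2)
>>> x = jt.randn(1, 3, 8, 8)
>>> y = pool(x)   # expected shape [1, 3, 4, 4]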

class jittor.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.MaxUnpool2d(kernel_size, stride=None)[源代码]

MaxUnpool2d is the inverse of MaxPool2d with indices. It takes the output and indices of MaxPool2d as input. An element is zero if it was not the max-pooled value.

Example:

>>> import jittor as jt
>>> from jittor import nn
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = jt.array([[[[ 1.,  2,  3,  4,0],
                        [ 5,  6,  7,  8,0],
                        [ 9, 10, 11, 12,0],
                        [13, 14, 15, 16,0],
                        [0,  0,  0,  0, 0]]]])
>>> output, indices = pool(input)
>>> unpool(output, indices, output_size=input.shape)
jt.array([[[[   0.,  0.,   0.,   0.,   0.],
            [   0.,  6.,   0.,   8.,   0.],
            [   0.,  0.,   0.,   0.,   0.],
            [   0., 14.,   0.,  16.,   0.],
            [   0.,  0.,   0.,   0.,   0.]]]])
execute(x, id, output_size=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.MaxUnpool3d(kernel_size, stride=None)[源代码]

MaxUnpool3d is the inverse of MaxPool3d with indices. It takes the output and indices of MaxPool3d as input. An element is zero if it was not the max-pooled value.

execute(x, id, output_size=None)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Pool(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False, count_include_pad=True, op='maximum')[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

class jittor.nn.Pool3d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False, count_include_pad=True, op='maximum')[源代码]
execute(x)[源代码]

Executes the module computation.

Raises NotImplementedError if the subclass does not override the method.

jittor.nn.avg_pool2d(x, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]
jittor.nn.max_pool2d(x, kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]
jittor.nn.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]
jittor.nn.pool(x, kernel_size, op, padding=0, stride=None)[源代码]
jittor.nn.pool3d(x, kernel_size, op, padding=0, stride=None)[源代码]
jittor.nn.ReLU

alias of jittor.make_module.<locals>.MakeModule

jittor.nn.ReLU6

alias of jittor.make_module.<locals>.MakeModule

jittor.nn.LeakyReLU

alias of jittor.make_module.<locals>.MakeModule

jittor.nn.Softmax

alias of jittor.make_module.<locals>.MakeModule