jittor.nn¶
This is the API documentation for Jittor's neural network module. You can access it with from jittor import nn.
- class jittor.nn.BatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, is_train=True, sync=True)[源代码]¶
- jittor.nn.BatchNorm1d¶
alias of
jittor.nn.BatchNorm
- jittor.nn.BatchNorm2d¶
alias of
jittor.nn.BatchNorm
- jittor.nn.BatchNorm3d¶
alias of
jittor.nn.BatchNorm
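The BatchNorm variants above all share the signature of jittor.nn.BatchNorm, where num_features is the channel count of the input. A minimal usage sketch (the shapes below are illustrative, not from the source):
>>> import jittor as jt
>>> from jittor import nn
>>> bn = nn.BatchNorm2d(16)          # num_features = number of channels
>>> x = jt.randn(4, 16, 32, 32)      # (N, C, H, W)
>>> y = bn(x)                        # normalized output, same shape as x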
- class jittor.nn.Bilinear(in1_features, in2_features, out_features, bias=True, dtype='float32')[源代码]¶
Applies a bilinear transformation \(out = in_1^T W\, in_2 + bias\).
Example:
m = nn.Bilinear(20, 30, 40)
input1 = jt.randn(128, 20)
input2 = jt.randn(128, 30)
output = m(input1, input2)
print(output.shape)  # [128, 40]
- class jittor.nn.ComplexNumber(real: jittor_core.jittor_core.Var, imag: Optional[jittor_core.jittor_core.Var] = None, is_concat_value=False)[源代码]¶
A complex number class.
It is stored internally as jt.stack([real, imag], dim=-1).
You can construct a ComplexNumber from a real and an imaginary part with ComplexNumber(real, imag), from a real part only with ComplexNumber(real), or from an already stacked value with ComplexNumber(value, is_concat_value=True).
Addition, subtraction, multiplication and true division between ComplexNumber and ComplexNumber, jt.Var, int, and float are implemented.
You can use shape, reshape, etc. just as with jt.Var.
- Example:
>>> real = jt.array([[[1., -2., 3.]]])
>>> imag = jt.array([[[0., 1., 6.]]])
>>> a = ComplexNumber(real, imag)
>>> a + a
>>> a / a
>>> a.norm()   # sqrt(real^2+imag^2)
>>> a.exp()    # e^real(cos(imag)+isin(imag))
>>> a.conj()   # ComplexNumber(real, -imag)
>>> a.fft2()   # cuda only now. len(real.shape) equals 3
>>> a.ifft2()  # cuda only now. len(real.shape) equals 3
>>> a = jt.array([[1,1],[1,-1]])
>>> b = jt.array([[0,-1],[1,0]])
>>> c = ComplexNumber(a,b) / jt.sqrt(3)
>>> c @ c.transpose().conj()
ComplexNumber(real=jt.Var([[0.99999994 0.        ]
 [0.         0.99999994]], dtype=float32),
              imag=jt.Var([[0. 0.]
 [0. 0.]], dtype=float32))
- property imag¶
- property real¶
- property shape¶
- class jittor.nn.Conv(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]¶
Applies a 2D convolution over an input signal composed of several input planes.
- Parameters
in_channels (int) – Number of channels in the input feature map
out_channels (int) – Number of channels in the output feature map
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Example:
>>> conv = nn.Conv2d(24, 32, 3)
>>> conv = nn.Conv2d(24, 32, (3,3))
>>> conv = nn.Conv2d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv2d(24, 32, 3, dilation=(3, 1))
>>> input = jt.randn(4, 24, 100, 100)
>>> output = conv(input)
- class jittor.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]¶
Applies a 1D convolution over an input signal composed of several input planes.
- Parameters
in_channels (int) – Number of channels in the input feature map
out_channels (int) – Number of channels in the output feature map
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to both sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Example:
>>> conv = nn.Conv1d(24, 32, 3)
>>> conv = nn.Conv1d(24, 32, 5)
>>> conv = nn.Conv1d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv1d(24, 32, 3, dilation=3)
>>> input = jt.randn(4, 24, 100)
>>> output = conv(input)
- jittor.nn.Conv2d¶
alias of
jittor.nn.Conv
- class jittor.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)[源代码]¶
Applies a 3D convolution over an input signal composed of several input planes.
- Parameters
in_channels (int) – Number of channels in the input feature map
out_channels (int) – Number of channels in the output feature map
kernel_size (int or tuple) – Size of the convolving kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to all six sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Example:
>>> conv = nn.Conv3d(24, 32, 3)
>>> conv = nn.Conv3d(24, 32, (3,3,3))
>>> conv = nn.Conv3d(24, 32, 3, stride=2, padding=1)
>>> conv = nn.Conv3d(24, 32, 3, dilation=(3, 1, 1))
>>> input = jt.randn(4, 24, 50, 50, 50)
>>> output = conv(input)
- class jittor.nn.ConvTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[源代码]¶
- jittor.nn.ConvTranspose2d¶
alias of
jittor.nn.ConvTranspose
- class jittor.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)[源代码]¶
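The ConvTranspose modules share the signature shown above. A minimal sketch of a transposed (up-sampling) convolution, assuming the usual output-size formula \(H_{out} = (H_{in} - 1) \cdot stride - 2 \cdot padding + kernel\_size + output\_padding\); the sizes are illustrative:
>>> import jittor as jt
>>> from jittor import nn
>>> deconv = nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1)
>>> x = jt.randn(4, 32, 50, 50)
>>> y = deconv(x)    # roughly doubles the spatial size: (4, 16, 100, 100)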
- class jittor.nn.DropPath(p=0.5, is_train=False)[源代码]¶
Drop paths (Stochastic Depth) per sample (when applied in the main path of residual blocks).
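A minimal sketch of how DropPath is typically used in the main path of a residual block (the block below is illustrative, not from the source):
>>> import jittor as jt
>>> from jittor import nn
>>> drop_path = nn.DropPath(p=0.1, is_train=True)   # drops the residual branch with probability p
>>> linear = nn.Linear(64, 64)
>>> x = jt.randn(8, 64)
>>> out = x + drop_path(linear(x))   # for dropped samples only the identity path remains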
- class jittor.nn.ELU(alpha=1.0)[源代码]¶
Applies the element-wise function:
\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]
- Parameters
x (jt.Var) – the input var
alpha (float, optional) – the \(\alpha\) value for the ELU formulation. Default: 1.0
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.elu(a)
jt.Var([-0.31873488 -0.6782155   2.128115  ], dtype=float32)
- class jittor.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, dtype='float32')[源代码]¶
A simple lookup table that stores embeddings of a fixed dictionary and size.
- Parameters
num_embeddings (int) – size of the dictionary of embeddings
embedding_dim (int) – the size of each embedding vector
- Example:
>>> embedding = nn.Embedding(10, 3)
>>> x = jt.int32([1, 2, 3, 3])
>>> embedding(x)
jt.Var([[ 1.1128596   0.19169547  0.706642  ]
 [ 1.2047412   1.9668795   0.9932192 ]
 [ 0.14941819  0.57047683 -1.3217674 ]
 [ 0.14941819  0.57047683 -1.3217674 ]], dtype=float32)
- class jittor.nn.Flatten(start_dim=1, end_dim=- 1)[源代码]¶
Flattens the contiguous range of dimensions in a Var.
- Parameters
start_dim (int) – the first dimension to be flattened. Default: 1.
end_dim (int) – the last dimension to be flattened. Default: -1.
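A minimal sketch of Flatten (shapes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> flatten = nn.Flatten()           # flattens dims 1..-1 by default
>>> x = jt.randn(4, 3, 8, 8)
>>> y = flatten(x)                   # shape (4, 192)
>>> z = nn.Flatten(start_dim=0)(x)   # shape (768,): everything flattened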
- class jittor.nn.GRU(input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False)[源代码]¶
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
- Parameters
input_size (int) – The number of expected features in the input.
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
batch_first (bool, optional) – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional (bool, optional) – If True, becomes a bidirectional GRU. Default: False
- Example:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
- class jittor.nn.GRUCell(input_size, hidden_size, bias=True)[源代码]¶
A gated recurrent unit (GRU) cell.
- Parameters
input_size (int) – The number of expected features in the input
hidden_size (int) – The number of features in the hidden state
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
Example:
>>> rnn = nn.GRUCell(10, 20)
>>> input = jt.randn((6, 3, 10))
>>> hx = jt.randn((3, 20))
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
- class jittor.nn.InstanceNorm(num_features, eps=1e-05, momentum=0.1, affine=True, is_train=True, sync=True)[源代码]¶
- jittor.nn.InstanceNorm1d¶
alias of
jittor.nn.InstanceNorm
- jittor.nn.InstanceNorm2d¶
alias of
jittor.nn.InstanceNorm
- jittor.nn.InstanceNorm3d¶
alias of
jittor.nn.InstanceNorm
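The InstanceNorm variants share the signature of jittor.nn.InstanceNorm. A minimal sketch, assuming the usual definition where each (sample, channel) slice is normalized separately (shapes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> inorm = nn.InstanceNorm2d(16)
>>> x = jt.randn(4, 16, 32, 32)      # (N, C, H, W)
>>> y = inorm(x)                     # same shape as x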
- class jittor.nn.KLDivLoss(reduction: str = 'mean', log_target: bool = False)[源代码]¶
Computes the Kullback-Leibler divergence loss.
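A minimal sketch, assuming the conventional KL-divergence semantics where the first argument holds log-probabilities and the target holds probabilities (log_target=False); the data is illustrative:
>>> import jittor as jt
>>> from jittor import nn
>>> kl = nn.KLDivLoss(reduction='mean')
>>> log_p = jt.log(nn.softmax(jt.randn(4, 10), dim=1))   # input as log-probabilities
>>> q = nn.softmax(jt.randn(4, 10), dim=1)               # target as probabilities
>>> loss = kl(log_p, q)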
- class jittor.nn.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0, bidirectional=False, proj_size=0)[源代码]¶
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
- Parameters
input_size (int) – The number of expected features in the input.
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
batch_first (bool, optional) – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional (bool, optional) – If True, becomes a bidirectional LSTM. Default: False
proj_size (int, optional) – If > 0, will use LSTM with projections of corresponding size. Default: 0
- Example:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> c0 = jt.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
- class jittor.nn.LSTMCell(input_size, hidden_size, bias=True)[源代码]¶
A long short-term memory (LSTM) cell.
- Parameters
input_size (int) – The number of expected features in the input
hidden_size (int) – The number of features in the hidden state
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
Example:
>>> rnn = nn.LSTMCell(10, 20)  # (input_size, hidden_size)
>>> input = jt.randn(2, 3, 10)  # (time_steps, batch, input_size)
>>> hx = jt.randn(3, 20)  # (batch, hidden_size)
>>> cx = jt.randn(3, 20)
>>> output = []
>>> for i in range(input.shape[0]):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
>>> output = jt.stack(output, dim=0)
- class jittor.nn.LayerNorm(normalized_shape, eps: float = 1e-05, elementwise_affine: bool = True)[源代码]¶
- jittor.nn.LayerNorm1d¶
alias of
jittor.nn.LayerNorm
- jittor.nn.LayerNorm2d¶
alias of
jittor.nn.LayerNorm
- jittor.nn.LayerNorm3d¶
alias of
jittor.nn.LayerNorm
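The LayerNorm variants share the signature of jittor.nn.LayerNorm, where normalized_shape covers the trailing dimensions to normalize over. A minimal sketch (sizes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> ln = nn.LayerNorm(64)            # normalize over the last dimension of size 64
>>> x = jt.randn(4, 10, 64)
>>> y = ln(x)                        # same shape as x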
- jittor.nn.ModuleList¶
alias of
jittor.nn.Sequential
- class jittor.nn.PReLU(num_parameters=1, init_=0.25)[源代码]¶
Applies the element-wise function:
\[\begin{split}\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases}\end{split}\]
- Parameters
x (jt.Var) – the input var
num_parameters (int, optional) – number of \(a\) to learn, can be either 1 or the number of channels at input. Default: 1
init_ (float, optional) – the initial value of \(a\). Default: 0.25
- Example:
>>> a = jt.randn(3)
>>> prelu = nn.PReLU()
>>> prelu(a)
jt.Var([-0.09595093  1.1338731   6.128115  ], dtype=float32)
- jittor.nn.Parameter(data, requires_grad=True)[源代码]¶
The Parameter interface isn't needed in Jittor; it does nothing and exists only for compatibility.
A Jittor Var is a Parameter when it is a member of a Module. If you don't want a Jittor Var member to be treated as a Parameter, just give it a name starting with an underscore _.
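A small sketch of the naming rule described above (the module is illustrative, not from the source):
>>> import jittor as jt
>>> from jittor import nn
>>> class Net(nn.Module):
...     def __init__(self):
...         self.w = jt.randn(3, 3)        # plain member Var: treated as a parameter
...         self._state = jt.zeros(3)      # leading underscore: not treated as a parameter
...     def execute(self, x):
...         return x @ self.w
>>> net = Net()
>>> params = net.parameters()              # collects self.w but not self._state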
- jittor.nn.ParameterDict¶
alias of
jittor.nn.ParameterList
- class jittor.nn.ParameterList(*args)[源代码]¶
- class jittor.nn.RNN(input_size: int, hidden_size: int, num_layers: int = 1, nonlinearity: str = 'tanh', bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False)[源代码]¶
Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.
- Parameters
input_size (int) – The number of expected features in the input.
hidden_size (int) – The number of features in the hidden state.
num_layers (int, optional) – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
nonlinearity (str, optional) – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
batch_first (bool, optional) – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
dropout (float, optional) – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional (bool, optional) – If True, becomes a bidirectional RNN. Default: False
- Example:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = jt.randn(5, 3, 10)
>>> h0 = jt.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
- class jittor.nn.RNNBase(mode: str, input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0, bidirectional: bool = False, proj_size: int = 0, nonlinearity: Optional[str] = None)[源代码]¶
- class jittor.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')[源代码]¶
An Elman RNN cell with tanh or ReLU non-linearity.
- Parameters
input_size (int) – The number of expected features in the input
hidden_size (int) – The number of features in the hidden state
bias (bool, optional) – If False, then the layer does not use bias weights b_ih and b_hh. Default: True.
nonlinearity (str, optional) – The non-linearity to use. Can be either ‘tanh’ or ‘relu’. Default: ‘tanh’.
Example:
>>> rnn = nn.RNNCell(10, 20)
>>> input = jt.randn((6, 3, 10))
>>> hx = jt.randn((3, 20))
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
- class jittor.nn.Sequential(*args)[源代码]¶
- dfs(parents, k, callback, callback_leave, recurse=True)[源代码]¶
A utility function to traverse the module.
- class jittor.nn.Softplus(beta=1, threshold=20)[源代码]¶
SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
Args:
[in] beta (float): the beta value for the Softplus formulation. Default: 1.
[in] threshold (float): values above this revert to a linear function. Default: 20.
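A minimal sketch, assuming the usual Softplus definition \(\text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x))\):
>>> import jittor as jt
>>> from jittor import nn
>>> sp = nn.Softplus()               # beta=1, threshold=20
>>> x = jt.array([-2., 0., 2.])
>>> y = sp(x)                        # smooth, strictly positive outputs
As noted above, inputs above the threshold fall back to a linear (identity) function for numerical stability.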
- jittor.nn.backward(v, *args, **kw)[源代码]¶
The backward variable interface doesn't exist in Jittor. Please use optimizer.backward(loss) or optimizer.step(loss) instead. For example, if your code looks like this:
optimizer.zero_grad()
loss.backward()
optimizer.step()
It can be changed to this:
optimizer.zero_grad()
optimizer.backward(loss)
optimizer.step()
Or more concise:
optimizer.step(loss)
The step function will automatically zero the gradients and run backward.
- jittor.nn.batch_norm(x, running_mean, running_var, weight=1, bias=0, training=False, momentum=0.1, eps=1e-05)[源代码]¶
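A minimal sketch of the functional form, assuming running_mean and running_var are per-channel statistics (the values are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> x = jt.randn(4, 16, 8, 8)
>>> running_mean = jt.zeros(16)
>>> running_var = jt.ones(16)
>>> y = nn.batch_norm(x, running_mean, running_var, training=False)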
- jittor.nn.binary_cross_entropy_with_logits(output, target, weight=None, pos_weight=None, size_average=True)[源代码]¶
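A minimal sketch, assuming output holds raw logits and target holds 0/1 labels of the same shape (the data is illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> logits = jt.randn(8, 1)
>>> target = jt.float32([[1], [0], [1], [1], [0], [0], [1], [0]])
>>> loss = nn.binary_cross_entropy_with_logits(logits, target)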
- jittor.nn.bmm(a, b)[源代码]¶
Batch matrix multiply: the shape of input a is [batch, n, m], the shape of input b is [batch, m, k], and the returned shape is [batch, n, k].
Example:
import jittor as jt
from jittor import nn
batch, n, m, k = 100, 5, 6, 7
a = jt.random((batch, n, m))
b = jt.random((batch, m, k))
c = nn.bmm(a, b)
- jittor.nn.conv(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)¶
Applies a 2D convolution over an input signal composed of several input planes.
- Parameters
x (jt.Var) – the input image
weight (jt.Var) – the convolution kernel
bias (jt.Var, optional) – the bias after convolution
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
Example:
>>> x = jt.randn(4, 24, 100, 100)
>>> w = jt.randn(32, 24, 3, 3)
>>> y = nn.conv2d(x, w)
- jittor.nn.conv2d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[源代码]¶
Applies a 2D convolution over an input signal composed of several input planes.
- Parameters
x (jt.Var) – the input image
weight (jt.Var) – the convolution kernel
bias (jt.Var, optional) – the bias after convolution
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
Example:
>>> x = jt.randn(4, 24, 100, 100)
>>> w = jt.randn(32, 24, 3, 3)
>>> y = nn.conv2d(x, w)
- jittor.nn.conv3d(x, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[源代码]¶
Applies a 3D convolution over an input signal composed of several input planes.
- Parameters
x (jt.Var) – the input volume
weight (jt.Var) – the convolution kernel
bias (jt.Var, optional) – the bias after convolution
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int or tuple, optional) – Padding added to all four sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
Example:
>>> x = jt.randn(4, 24, 50, 50, 50)
>>> w = jt.randn(32, 24, 3, 3, 3)
>>> y = nn.conv3d(x, w)
- jittor.nn.conv_transpose(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[源代码]¶
- jittor.nn.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)¶
- jittor.nn.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)[源代码]¶
- jittor.nn.cross_entropy_loss(output, target, weight=None, ignore_index=None, reduction='mean')[源代码]¶
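A minimal sketch, assuming output holds unnormalized class scores of shape [N, C] and target holds integer class indices of shape [N] (the data is illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> output = jt.randn(4, 10)
>>> target = jt.int32([1, 0, 3, 9])
>>> loss = nn.cross_entropy_loss(output, target)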
- jittor.nn.elu(x: jittor_core.jittor_core.Var, alpha: float = 1.0) jittor_core.jittor_core.Var [源代码]¶
Applies the element-wise function:
\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]
- Parameters
x (jt.Var) – the input var
alpha (float, optional) – the \(\alpha\) value for the ELU formulation. Default: 1.0
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.elu(a)
jt.Var([-0.31873488 -0.6782155   2.128115  ], dtype=float32)
- jittor.nn.gelu(x)[源代码]¶
Applies the element-wise function:
\[\text{GELU}(x) = x * \Phi(x)\]
where \(\Phi(x)\) is the Cumulative Distribution Function of the Gaussian Distribution.
- Parameters
x (jt.Var) – the input var
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.gelu(a)
jt.Var([-0.134547    0.9882567   6.128115  ], dtype=float32)
- jittor.nn.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=False)[源代码]¶
- jittor.nn.grid_sample_v0(input, grid, mode='bilinear', padding_mode='zeros')[源代码]¶
Given an input and a flow-field grid, computes the output using input values and pixel locations from grid.
grid specifies the sampling pixel locations normalized by the input spatial dimensions. Therefore, it should have most values in the range of [-1, 1]. For example, the values x = -1, y = -1 correspond to the left-top pixel of the input, and the values x = 1, y = 1 correspond to the right-bottom pixel of the input.
Args:
[in] input (var): the source input var, whose shape is (N, C, Hi, Wi)
[in] grid (var): the pixel locations, whose shape is (N, Ho, Wo, 2)
[in] mode (string): the interpolate way, default: bilinear.
[in] padding_mode (string): the padding way, default: zeros.
[out] output (var): the output var, whose shape is (N, C, Ho, Wo)
Example:
>>> x = jt.array([[[[1,2],[3,4]]]])
>>> print(x)
[[[[1 2]
   [3 4]]]]
>>> grid = jt.array([[[[0.5, 0.5]]]])
>>> print(x.shape, grid.shape)
[1,1,2,2,] [1,1,1,2,]
>>> nn.grid_sample(x, grid)
[[[[3.25]]]]
- jittor.nn.instance_norm(x, running_mean=None, running_var=None, weight=1, bias=0, momentum=0.1, eps=1e-05)[源代码]¶
- jittor.nn.interpolate(X, size=None, scale_factor=None, mode='bilinear', align_corners=False, tf_mode=False)[源代码]¶
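A minimal sketch of interpolate, assuming 4-D NCHW input as with the other 2-D operators here (sizes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> x = jt.randn(1, 3, 16, 16)
>>> y = nn.interpolate(x, size=(32, 32), mode='bilinear')   # resize to a fixed size
>>> z = nn.interpolate(x, scale_factor=2, mode='bilinear')  # scale by a factor instead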
- jittor.nn.leaky_relu(x, scale=0.01)[源代码]¶
Applies the element-wise function:
\[\begin{split}\text{LeakyReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{scale} \times x, & \text{ otherwise } \end{cases}\end{split}\]
- Parameters
x (jt.Var) – the input var
scale (float, optional) – the scale value for the leaky relu formulation. Default: 0.01
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373  1.1338731   6.128115  ], dtype=float32)
>>> nn.leaky_relu(a)
jt.Var([-3.8380371e-03  1.1338731e+00  6.1281152e+00], dtype=float32)
- jittor.nn.matmul(a, b)[源代码]¶
Matrix multiply.
Example:
a = jt.random([3])
b = jt.random([3])
c = jt.matmul(a, b)
assert c.shape == [1]

a = jt.random([3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [3]

a = jt.random([10, 3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [10, 3]

a = jt.random([10, 3, 4])
b = jt.random([4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([10, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([8, 1, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [8, 10, 3, 5]
- jittor.nn.one_hot(x: jittor_core.jittor_core.Var, num_classes: int = - 1) jittor_core.jittor_core.Var [源代码]¶
Returns the one_hot encoding of inputs.
- Parameters
x (jt.Var with bool or integer dtype) – class values of any shape
num_classes (int, optional) – Total number of classes. If set to -1, the number of classes will be inferred as one greater than the largest class value in the input tensor.
- Returns
a Var with one more dimension, with 1 at the index of the last dimension indicated by the input and 0 everywhere else.
- Return type
jt.Var
Note
If the values in x are greater than num_classes or less than 0, the returned one_hot will be all zeros.
- Example:
>>> jt.nn.one_hot(jt.arange(5) % 3)
jt.Var([[1 0 0]
 [0 1 0]
 [0 0 1]
 [1 0 0]
 [0 1 0]], dtype=int32)
>>> jt.nn.one_hot(jt.arange(5) % 3, num_classes=5)
jt.Var([[1 0 0 0 0]
 [0 1 0 0 0]
 [0 0 1 0 0]
 [1 0 0 0 0]
 [0 1 0 0 0]], dtype=int32)
>>> jt.nn.one_hot(jt.arange(6).reshape(3,2) % 3)
jt.Var([[[1 0 0]
  [0 1 0]]

 [[0 0 1]
  [1 0 0]]

 [[0 1 0]
  [0 0 1]]], dtype=int32)
- jittor.nn.relu(x)[源代码]¶
Applies the element-wise function:
\[\text{ReLU}(x) = \max(0,x)\]
- Parameters
x (jt.Var) – the input var
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373  1.1338731   6.128115  ], dtype=float32)
>>> nn.relu(a)
jt.Var([0.         1.1338731  6.128115  ], dtype=float32)
- jittor.nn.relu6(x)[源代码]¶
Applies the element-wise function:
\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]
- Parameters
x (jt.Var) – the input var
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373  1.1338731   6.128115  ], dtype=float32)
>>> nn.relu6(a)
jt.Var([0.         1.1338731  6.        ], dtype=float32)
- jittor.nn.sign(x: jittor_core.jittor_core.Var) jittor_core.jittor_core.Var [源代码]¶
Returns the signs of the elements of x.
- Parameters
x (jt.Var) – the input Var
- Example:
>>> a = jt.float32([0.99, 0, -0.99])
>>> nn.sign(a)
jt.Var([ 1.  0. -1.], dtype=float32)
- jittor.nn.silu(x)[源代码]¶
Applies the element-wise function:
\[\text{SiLU}(x) = x * \text{Sigmoid}(x)\]
- Parameters
x (jt.Var) – the input var
- Example:
>>> a = jt.randn(3)
>>> a
jt.Var([-0.38380373 -1.1338731   2.128115  ], dtype=float32)
>>> nn.silu(a)
jt.Var([-0.15552104 -0.27603802  1.9016962 ], dtype=float32)
- jittor.nn.smooth_l1_loss(y_true, y_pred, reduction='mean')[源代码]¶
Implements the Smooth-L1 loss. y_true and y_pred are typically of shape [N, 4], but could be any shape.
- Args:
y_true – ground truth
y_pred – predictions
reduction – the reduction mode; must be one of ['mean', 'sum', 'none']
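A minimal sketch with box-regression-style shapes, as mentioned above (the data is illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> y_true = jt.randn(8, 4)
>>> y_pred = jt.randn(8, 4)
>>> loss = nn.smooth_l1_loss(y_true, y_pred, reduction='mean')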
- jittor.nn.upsample(img, size, mode='nearest', align_corners=False, tf_mode=False)¶
- class jittor.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]¶
- class jittor.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]¶
- class jittor.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]¶
- class jittor.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]¶
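A minimal sketch of the pooling modules (shapes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> x = jt.randn(4, 16, 32, 32)
>>> y = nn.MaxPool2d(2, stride=2)(x)             # (4, 16, 16, 16)
>>> z = nn.AvgPool2d(3, stride=1, padding=1)(x)  # (4, 16, 32, 32)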
- class jittor.nn.MaxUnpool2d(kernel_size, stride=None)[源代码]¶
MaxUnpool2d is the inverse operation of MaxPool2d with indices. It takes the output and indices of MaxPool2d as input. An element will be zero if it was not the max-pooled value.
Example:
>>> import jittor as jt
>>> from jittor import nn
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)
>>> unpool = nn.MaxUnpool2d(2, stride=2)
>>> input = jt.array([[[[ 1.,  2,  3,  4, 0],
...                     [ 5,  6,  7,  8, 0],
...                     [ 9, 10, 11, 12, 0],
...                     [13, 14, 15, 16, 0],
...                     [ 0,  0,  0,  0, 0]]]])
>>> output, indices = pool(input)
>>> unpool(output, indices, output_size=input.shape)
jt.array([[[[ 0.,  0.,  0.,  0.,  0.],
            [ 0.,  6.,  0.,  8.,  0.],
            [ 0.,  0.,  0.,  0.,  0.],
            [ 0., 14.,  0., 16.,  0.],
            [ 0.,  0.,  0.,  0.,  0.]]]])
- class jittor.nn.MaxUnpool3d(kernel_size, stride=None)[源代码]¶
MaxUnpool3d is the inverse operation of MaxPool3d with indices. It takes the output and indices of MaxPool3d as input. An element will be zero if it was not the max-pooled value.
- class jittor.nn.Pool(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False, count_include_pad=True, op='maximum')[源代码]¶
- class jittor.nn.Pool3d(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False, count_include_pad=True, op='maximum')[源代码]¶
- jittor.nn.avg_pool2d(x, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)[源代码]¶
- jittor.nn.max_pool2d(x, kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]¶
- jittor.nn.max_pool3d(x, kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False)[源代码]¶
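A minimal sketch of the functional pooling forms, mirroring the module versions above (shapes are illustrative):
>>> import jittor as jt
>>> from jittor import nn
>>> x = jt.randn(4, 16, 32, 32)
>>> y = nn.max_pool2d(x, 2, stride=2)             # (4, 16, 16, 16)
>>> z = nn.avg_pool2d(x, 3, stride=1, padding=1)  # (4, 16, 32, 32)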
- jittor.nn.ReLU¶
alias of
jittor.make_module.<locals>.MakeModule
- jittor.nn.ReLU6¶
alias of
jittor.make_module.<locals>.MakeModule
- jittor.nn.LeakyReLU¶
alias of
jittor.make_module.<locals>.MakeModule
- jittor.nn.Softmax¶
alias of
jittor.make_module.<locals>.MakeModule