jittor.nn

This is the API documentation for Jittor's neural network module. You can access it with from jittor import nn.
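
A minimal illustrative sketch (added here; the layer sizes are arbitrary):

import jittor as jt
from jittor import nn

x = jt.random((4, 10))      # a batch of 4 samples with 10 features
fc = nn.Linear(10, 2)       # calling a module invokes its execute method
y = fc(x)
assert y.shape == [4, 2]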

class jittor.nn.BCELoss
execute(output, target, size_average=True)
class jittor.nn.BCEWithLogitsLoss
execute(output, target, size_average=True)
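
An illustrative sketch of how the two losses relate (assuming, as is conventional, that BCELoss expects probabilities while BCEWithLogitsLoss applies the sigmoid internally):

import jittor as jt
from jittor import nn

logits = jt.random((4, 1))
target = jt.random((4, 1))                        # targets in [0, 1]
loss1 = nn.BCEWithLogitsLoss()(logits, target)    # takes raw logits
loss2 = nn.BCELoss()(jt.sigmoid(logits), target)  # takes probabilities
# loss1 and loss2 should agree up to numerical precision
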
class jittor.nn.BatchNorm(num_features, eps=1e-05, momentum=0.1, affine=None, is_train=True, sync=True)
execute(x)
class jittor.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=None, is_train=True, sync=True)
execute(x)
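
A shape sketch (assuming BatchNorm normalizes a 4-D (N, C, H, W) input over the channel dimension and BatchNorm1d a 2-D (N, C) input):

import jittor as jt
from jittor import nn

x4 = jt.random((8, 16, 5, 5))    # (N, C, H, W)
assert nn.BatchNorm(16)(x4).shape == [8, 16, 5, 5]

x2 = jt.random((8, 16))          # (N, C)
assert nn.BatchNorm1d(16)(x2).shape == [8, 16]
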
class jittor.nn.ConstantPad2d(padding, value)
execute(x)
class jittor.nn.Conv(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
execute(x)
class jittor.nn.ConvTranspose(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
execute(x)
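
A shape sketch (standard convolution arithmetic assumed; a ConvTranspose with matching parameters roughly inverts the spatial downsampling of Conv):

import jittor as jt
from jittor import nn

x = jt.random((1, 3, 32, 32))
y = nn.Conv(3, 8, kernel_size=3, stride=2, padding=1)(x)
assert y.shape == [1, 8, 16, 16]   # (32 + 2*1 - 3) // 2 + 1 = 16

z = nn.ConvTranspose(8, 3, kernel_size=3, stride=2, padding=1, output_padding=1)(y)
assert z.shape == [1, 3, 32, 32]   # back to the input resolution
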
class jittor.nn.CrossEntropyLoss
execute(output, target)
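
A usage sketch (assuming output holds per-class logits of shape (N, classes) and target holds integer class indices, as is conventional):

import jittor as jt
from jittor import nn

output = jt.random((3, 5))    # logits for 3 samples over 5 classes
target = jt.array([0, 4, 2])  # integer class indices
loss = nn.CrossEntropyLoss()(output, target)
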
class jittor.nn.Dropout(p=0.5, is_train=False)
execute(input)
class jittor.nn.Embedding(num, dim)
execute(x)
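
A sketch for Embedding (assuming num is the table size, dim the vector width, and x a var of integer indices; these semantics are inferred, not stated above):

import jittor as jt
from jittor import nn

emb = nn.Embedding(100, 16)   # 100 entries, 16-dim vectors
idx = jt.array([3, 7, 7])     # indices into the table
assert emb(idx).shape == [3, 16]
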
class jittor.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=None, is_train=True)
execute(x)
class jittor.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=None, is_train=True, sync=True)
execute(x)
class jittor.nn.L1Loss
execute(output, target)
class jittor.nn.Linear(in_features, out_features, bias=True)
execute(x)
class jittor.nn.MSELoss
execute(output, target)
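
A sketch of the two elementwise regression losses (both assumed to reduce to a scalar mean):

import jittor as jt
from jittor import nn

pred = jt.random((4, 3))
gt = jt.random((4, 3))
l1 = nn.L1Loss()(pred, gt)     # mean absolute error
mse = nn.MSELoss()(pred, gt)   # mean squared error
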
jittor.nn.ModuleList

Alias of jittor.nn.Sequential

class jittor.nn.PReLU(num_parameters=1, init_=0.25)
execute(x)
class jittor.nn.PixelShuffle(upscale_factor)
execute(x)
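
A shape sketch for PixelShuffle (it rearranges a (N, C*r*r, H, W) input into (N, C, H*r, W*r), where r is the upscale_factor):

import jittor as jt
from jittor import nn

r = 2
x = jt.random((1, 4 * r * r, 8, 8))
assert nn.PixelShuffle(r)(x).shape == [1, 4, 16, 16]
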
class jittor.nn.ReflectionPad2d(padding)
execute(x)
class jittor.nn.ReplicationPad2d(padding)
execute(x)
class jittor.nn.Resize(size, mode='nearest', align_corners=False)
execute(x)
class jittor.nn.Sequential(*args)
append(mod)
dfs(parents, k, callback, callback_leave)
execute(x)
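
A composition sketch for Sequential (children are applied in order; append adds one more, per the methods listed above):

import jittor as jt
from jittor import nn

net = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
)
net.append(nn.Linear(32, 2))   # extend an existing pipeline

x = jt.random((4, 10))
assert net(x).shape == [4, 2]
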
class jittor.nn.Sigmoid
execute(x)
class jittor.nn.Softplus(beta=1, threshold=20)

Softplus is a smooth approximation to the ReLU function and can be used to constrain the output of a model to always be positive.

Args:

[in] beta (float): the beta value for the Softplus formulation. Default: 1.

[in] threshold (float): values above this revert to a linear function. Default: 20.

execute(x)
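
An illustrative sketch of the formula Softplus approximates, softplus(x) = (1/beta) * log(1 + exp(beta * x)):

import jittor as jt
from jittor import nn

x = jt.array([-2.0, 0.0, 2.0])
y = nn.Softplus()(x)           # beta=1: log(1 + exp(x))
ref = jt.log(1 + jt.exp(x))    # y should match ref; outputs are strictly positive
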
class jittor.nn.Tanh
execute(x)
class jittor.nn.Upsample(scale_factor=None, mode='nearest')
execute(x)
class jittor.nn.ZeroPad2d(padding)
execute(x)
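
A shape sketch for the 2-D padding modules (assuming an int padding pads all four sides equally, as is the usual convention):

import jittor as jt
from jittor import nn

x = jt.random((1, 1, 4, 4))
assert nn.ZeroPad2d(1)(x).shape == [1, 1, 6, 6]
assert nn.ReflectionPad2d(1)(x).shape == [1, 1, 6, 6]
assert nn.ReplicationPad2d(1)(x).shape == [1, 1, 6, 6]
assert nn.ConstantPad2d(1, 0.5)(x).shape == [1, 1, 6, 6]
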
jittor.nn.bce_loss(output, target, size_average=True)
jittor.nn.bmm(a, b)

Batch matrix multiplication: the shape of input a is [batch, n, m], the shape of input b is [batch, m, k], and the returned shape is [batch, n, k].

Example:

import jittor as jt
from jittor import nn

batch, n, m, k = 100, 5, 6, 7

a = jt.random((batch, n, m))
b = jt.random((batch, m, k))
c = nn.bmm(a, b)
jittor.nn.cross_entropy_loss(output, target, ignore_index=None)
jittor.nn.get_init_var_rand(shape, dtype)
jittor.nn.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')

Given an input and a flow-field grid, computes the output using input values and pixel locations from grid.

grid specifies the sampling pixel locations, normalized by the input spatial dimensions, so most of its values should lie in the range [-1, 1]. For example, x = -1, y = -1 is the left-top pixel of input, and x = 1, y = 1 is the right-bottom pixel of input.

Args:

[in] input (var): the source input var, whose shape is (N, C, Hi, Wi)

[in] grid (var): the pixel locations, whose shape is (N, Ho, Wo, 2)

[in] mode (string): the interpolation mode, default: bilinear.

[in] padding_mode (string): the padding mode, default: zeros.

[out] output (var): the output var, whose shape is (N, C, Ho, Wo)

Example:

>>> x = jt.array([[[[1,2],[3,4]]]])
>>> print(x)
[[[[1 2]
[3 4]]]] 
>>> grid = jt.array([[[[0.5, 0.5]]]])
>>> print(x.shape, grid.shape)
[1,1,2,2,] [1,1,1,2,]
>>> nn.grid_sample(x, grid)
[[[[3.25]]]]
jittor.nn.l1_loss(output, target)
jittor.nn.leaky_relu(x, scale=0.01)
jittor.nn.matmul(a, b)

Matrix multiplication; leading batch dimensions broadcast, as the examples below show.

Example:

a = jt.random([3])
b = jt.random([3])
c = jt.matmul(a, b)
assert c.shape == [1]

a = jt.random([3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [3]

a = jt.random([10, 3, 4])
b = jt.random([4])
c = jt.matmul(a, b)
assert c.shape == [10, 3]

a = jt.random([10, 3, 4])
b = jt.random([4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([10, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [10, 3, 5]

a = jt.random([8, 1, 3, 4])
b = jt.random([10, 4, 5])
c = jt.matmul(a, b)
assert c.shape == [8, 10, 3, 5]
jittor.nn.matmul_transpose(a, b)

returns a * b^T
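
An illustrative sketch (shapes inferred from the one-line description: the last dimensions of a and b must match):

import jittor as jt
from jittor import nn

a = jt.random([3, 4])
b = jt.random([5, 4])
c = nn.matmul_transpose(a, b)   # for 2-D inputs, same as matmul(a, b transposed)
assert c.shape == [3, 5]
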

jittor.nn.mse_loss(output, target)
jittor.nn.relu(x)
jittor.nn.relu6(x)
jittor.nn.resize(img, size, mode='nearest', align_corners=False)
jittor.nn.softmax(x, dim=None)
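
A sketch for softmax (assuming dim selects the axis that is normalized to sum to 1):

import jittor as jt
from jittor import nn

x = jt.random((2, 5))
p = nn.softmax(x, dim=1)
assert p.shape == [2, 5]   # each row of p sums to 1
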
jittor.nn.upsample(img, size, mode='nearest', align_corners=False)
class jittor.nn.AdaptiveAvgPool2d(output_size)
execute(x)
class jittor.nn.Pool(kernel_size, stride=None, padding=0, dilation=None, return_indices=None, ceil_mode=False, count_include_pad=True, op='maximum')
execute(x)
jittor.nn.pool(x, kernel_size, op, padding=0, stride=1)
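
A pooling sketch (op='maximum' gives max pooling; 'mean' for average pooling and an int output_size for AdaptiveAvgPool2d are assumptions):

import jittor as jt
from jittor import nn

x = jt.random((1, 3, 8, 8))
assert nn.Pool(2, stride=2, op='maximum')(x).shape == [1, 3, 4, 4]
assert nn.AdaptiveAvgPool2d(1)(x).shape == [1, 3, 1, 1]   # global average pooling
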
jittor.nn.ReLU

Alias of jittor.make_module.<locals>.MakeModule

jittor.nn.ReLU6

Alias of jittor.make_module.<locals>.MakeModule

jittor.nn.LeakyReLU

Alias of jittor.make_module.<locals>.MakeModule

jittor.nn.Softmax

Alias of jittor.make_module.<locals>.MakeModule