jittor¶
This is the API documentation of the main Jittor module, which you can access via import jittor.
- class jittor.Function(*args, **kw)[源代码]¶
Function Module for customized backward operations
Example 1 (a Function can have multiple inputs and multiple outputs, and the user can store values for the backward computation):
import jittor as jt
from jittor import Function

class MyFunc(Function):
    def execute(self, x, y):
        self.x = x
        self.y = y
        return x*y, x/y

    def grad(self, grad0, grad1):
        return grad0 * self.y, grad1 * self.x

a = jt.array(3.0)
b = jt.array(4.0)
func = MyFunc.apply
c, d = func(a, b)
da, db = jt.grad(c+d*3, [a, b])
assert da.data == 4
assert db.data == 9
Example 2 (a Function can return None for no gradient, and an incoming gradient can also be None):
import jittor as jt
from jittor import Function

class MyFunc(Function):
    def execute(self, x, y):
        self.x = x
        self.y = y
        return x*y, x/y

    def grad(self, grad0, grad1):
        assert grad1 is None
        return grad0 * self.y, None

a = jt.array(3.0)
b = jt.array(4.0)
func = MyFunc.apply
c, d = func(a, b)
d.stop_grad()
da, db = jt.grad(c+d*3, [a, b])
assert da.data == 4
assert db.data == 0
- class jittor.GradHooker(hook)[源代码]¶
- class jittor.Module(*args, **kw)[源代码]¶
- dfs(parents, k, callback, callback_leave=None, recurse=True)[源代码]¶
A utility function to traverse the module.
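Example (a minimal sketch; the callback signature (parents, key, value, n_children) below is an assumption based on how modules() uses dfs, not a documented contract):
import jittor as jt
from jittor import nn

net = nn.Sequential(nn.Linear(2, 10), nn.ReLU())
mods = []
def callback(parents, k, v, n):
    # assumed signature: (parent list, key, visited value, child count)
    if isinstance(v, jt.Module):
        mods.append(v)
net.dfs([], None, callback, None)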
- execute(*args, **kw)[源代码]¶
Executes the module computation.
Raises NotImplementedError if the subclass does not override the method.
- float_auto()[源代码]¶
Converts all parameters to float16 or float32 automatically, according to jt.flags.auto_mixed_precision_level and jt.flags.amp_reg.
- is_training() jittor_core.ops.bool [源代码]¶
Returns whether the module is in training mode.
- load(path: str)[源代码]¶
loads parameters from a file.
- Parameters
path (str) – path to load.
Example:
>>> class Net(nn.Module):
>>> ...
>>> net = Net()
>>> net.save('net.pkl')
>>> net.load('net.pkl')
This method also supports loading a state dict from a PyTorch .pth file.
Note
When the loaded parameters are inconsistent with the model definition, jittor prints an error message but does not raise an exception. If a loaded parameter name does not exist in the model definition, a message like the following is printed and the parameter is ignored:
>>> [w 0205 21:49:39.962762 96 __init__.py:723] load parameter w failed ...
If the shape of a loaded parameter is inconsistent with the model definition, a message like the following is printed and the parameter is ignored:
>>> [e 0205 21:49:39.962822 96 __init__.py:739] load parameter w failed: expect the shape of w to be [1000,100,], but got [3,100,100,]
If errors occur during loading, jittor prints a summary; you should check the error messages carefully:
>>> [w 0205 21:49:39.962906 96 __init__.py:741] load total 100 params, 3 failed
- load_parameters(params)[源代码]¶
loads parameters into the Module.
- Parameters
params – dictionary of parameter names and parameters.
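Example (a minimal sketch; copying parameters between two identically shaped modules via state_dict):
import jittor as jt
from jittor import nn

src = nn.Linear(2, 5)
dst = nn.Linear(2, 5)
dst.load_parameters(src.state_dict())  # dict of name -> parameter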
- modules() List [源代码]¶
Returns a list of sub-modules in the module recursively.
Example:
>>> net = nn.Sequential(nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 2))
>>> net.modules()
[Sequential(
    0: Linear(2, 10, float32[10,], None)
    1: relu()
    2: Linear(10, 2, float32[2,], None)
), Linear(2, 10, float32[10,], None), relu(), Linear(10, 2, float32[2,], None)]
- named_modules()[源代码]¶
Returns a list of sub-modules and their names recursively.
Example:
>>> net = nn.Sequential(nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 2))
>>> net.named_modules()
[('', Sequential(
    0: Linear(2, 10, float32[10,], None)
    1: relu()
    2: Linear(10, 2, float32[2,], None)
)), ('0', Linear(2, 10, float32[10,], None)), ('1', relu()), ('2', Linear(10, 2, float32[2,], None))]
- named_parameters(recurse=True) List[Tuple[str, jittor_core.jittor_core.Var]] [源代码]¶
Returns a list of module parameters and their names.
Example:
>>> net = nn.Linear(2, 5)
>>> net.named_parameters()
[('weight', jt.Var([[ 0.5964666  -0.3175258 ]
 [ 0.41493994 -0.66982657]
 [-0.32677156  0.49614117]
 [-0.24102807 -0.08656466]
 [ 0.15868133 -0.12468725]], dtype=float32)),
 ('bias', jt.Var([-0.38282675  0.36271113 -0.7063226   0.02899247  0.52210844], dtype=float32))]
- parameters(recurse=True) List [源代码]¶
Returns a list of module parameters.
Example:
>>> net = nn.Sequential(nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 2))
>>> for p in net.parameters():
...     print(p.name())
...
0.weight
0.bias
2.weight
2.bias
- register_backward_hook(func)[源代码]¶
hook both the input and output gradients during backpropagation of this module.
Arguments of hook are defined as:
hook(module, grad_input:tuple(jt.Var), grad_output:tuple(jt.Var)) -> tuple(jt.Var) or None
grad_input contains the original gradients of this module's inputs, grad_output contains the gradients of its outputs, and the return value, if not None, is used to replace the gradients of the inputs.
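Example (a minimal sketch; the hook below doubles the input gradients):
import jittor as jt
from jittor import nn

net = nn.Linear(2, 3)
def hook(module, grad_input, grad_output):
    # return a tuple to replace the input gradients, or None to keep them
    return tuple(g * 2 for g in grad_input)
net.register_backward_hook(hook)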
- register_forward_hook(func)[源代码]¶
Register a forward function hook that will be called after Module.execute.
The hook function will be called with the following arguments:
hook(module, input_args, output)
or:
hook(module, input_args, output, input_kwargs)
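Example (a minimal sketch; printing the module name and output shape after each call):
import jittor as jt
from jittor import nn

net = nn.Linear(2, 3)
def hook(module, input_args, output):
    # called after Module.execute with the positional inputs and the output
    print(type(module).__name__, output.shape)
net.register_forward_hook(hook)
y = net(jt.randn(4, 2))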
- register_pre_forward_hook(func)[源代码]¶
Register a forward function hook that will be called before Module.execute.
The hook function will be called with the following arguments:
hook(module, input_args)
or:
hook(module, input_args, input_kwargs)
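Example (a minimal sketch; inspecting the inputs before Module.execute runs):
import jittor as jt
from jittor import nn

net = nn.Linear(2, 3)
def pre_hook(module, input_args):
    # called before Module.execute with the positional inputs
    print("inputs:", [x.shape for x in input_args])
net.register_pre_forward_hook(pre_hook)
y = net(jt.randn(4, 2))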
- save(path: str)[源代码]¶
saves parameters to a file.
- Parameters
path (str) – path to save.
Example:
>>> class Net(nn.Module):
>>> ...
>>> net = Net()
>>> net.save('net.pkl')
>>> net.load('net.pkl')
- state_dict(to=None, recurse=True)[源代码]¶
Returns a dictionary containing Jittor Var of the module and its descendants.
- Args:
to: target type of var, can be None, 'numpy' or 'torch'
- Return:
dictionary of module’s states.
Example:
import jittor as jt
from jittor.models import resnet50

jittor_model = resnet50()
params = jittor_model.state_dict()
jittor_model.load_state_dict(params)
Example 2 (export Jittor params to PyTorch):
import jittor as jt
from jittor.models import resnet50

jittor_model = resnet50()

import torch
from torchvision.models import resnet50
torch_model = resnet50()
torch_model.load_state_dict(jittor_model.state_dict(to="torch"))
- property training¶
- jittor.argmax(x: jittor_core.jittor_core.Var, dim: int, keepdims: jittor_core.ops.bool = False)[源代码]¶
Returns the indices and values of the maximum elements along the specified dimension.
- Parameters
x (jt.Var, numpy array, or python sequence.) – the input Var.
dim (int.) – the dimension to reduce.
keepdims (bool, optional) – whether the output Var has dim retained or not. Defaults to False.
Example:
>>> a = jt.randn((2, 4))
>>> a
jt.Var([[-0.33272865 -0.4951588   1.4128606   0.13734372]
 [-1.633469    0.19593953 -0.7803732  -0.5260756 ]], dtype=float32)
>>> a.argmax(dim=0)
(jt.Var([0 1 0 0], dtype=int32), jt.Var([-0.33272865  0.19593953  1.4128606   0.13734372], dtype=float32))
>>> a.argmax(dim=1)
(jt.Var([2 1], dtype=int32), jt.Var([1.4128606  0.19593953], dtype=float32))
- jittor.argmin(x, dim: int, keepdims: jittor_core.ops.bool = False)[源代码]¶
Returns the indices and values of the minimum elements along the specified dimension.
- Parameters
x (jt.Var, numpy array, or python sequence.) – the input Var.
dim (int.) – the dimension to reduce.
keepdims (bool, optional) – whether the output Var has dim retained or not. Defaults to False.
Example:
>>> a = jt.randn((2, 4))
>>> a
jt.Var([[-0.33272865 -0.4951588   1.4128606   0.13734372]
 [-1.633469    0.19593953 -0.7803732  -0.5260756 ]], dtype=float32)
>>> a.argmin(dim=0)
(jt.Var([1 0 1 1], dtype=int32), jt.Var([-1.633469  -0.4951588 -0.7803732 -0.5260756], dtype=float32))
>>> a.argmin(dim=1)
(jt.Var([1 0], dtype=int32), jt.Var([-0.4951588 -1.633469 ], dtype=float32))
- jittor.array(data, dtype=None)[源代码]¶
Constructs a jittor Var from a number, List, numpy array or another jittor Var.
- Parameters
data (number, list, numpy.ndarray, or jittor.Var.) – The data to initialize the Var.
dtype (str, jittor type-cast function, or None.) – The data type of the Var. If None, the data type will be inferred from the data.
Example:
>>> jt.array(1)
jt.Var([1], dtype=int32)
>>> jt.array([0, 2.71, 3.14])
jt.Var([0.   2.71 3.14], dtype=float32)
>>> jt.array(np.arange(4, dtype=np.uint8))
jt.Var([0 1 2 3], dtype=uint8)
- jittor.cat(*args, **kw)¶
- jittor.clamp_(x, min_v=None, max_v=None)[源代码]¶
In-place version of clamp().
- Args:
x (Jittor Var): the input var
min_v (Number or Var, optional): lower bound of the clamp range
max_v (Number or Var, optional): upper bound of the clamp range
- Return:
x itself after clamping.
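Example (a minimal sketch of the in-place behavior):
import jittor as jt

x = jt.array([-1.0, 0.5, 2.0])
x.clamp_(min_v=0.0, max_v=1.0)  # modifies x in place and returns it
print(x)  # values are now [0.0, 0.5, 1.0]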
- jittor.concat(*args, **kw)¶
- jittor.dirty_fix_pytorch_runtime_error()[源代码]¶
This function should be called before importing PyTorch.
Example:
import jittor as jt
jt.dirty_fix_pytorch_runtime_error()
import torch
- class jittor.enable_grad(**jt_flags)[源代码]¶
enable_grad scope; all variables created inside this scope will have gradient computation enabled.
Example:
import jittor as jt

with jt.enable_grad():
    ...
- jittor.fetch(*args)[源代码]¶
Async fetch vars with function closure.
Example 1:
for img, label in your_dataset:
    pred = your_model(img)
    loss = critic(pred, label)
    acc = accuracy(pred, label)
    jt.fetch(acc, loss,
        lambda acc, loss: print(f"loss:{loss} acc:{acc}"))
Example 2:
for i, (img, label) in enumerate(your_dataset):
    pred = your_model(img)
    loss = critic(pred, label)
    acc = accuracy(pred, label)
    # variable i will be bound into the function closure
    jt.fetch(i, acc, loss,
        lambda i, acc, loss: print(f"#{i}, loss:{loss} acc:{acc}"))
- jittor.full(shape, val, dtype='float32')[源代码]¶
Constructs a jittor Var with all elements set to val.
- Parameters
shape (list or tuple.) – The shape of the output Var.
val (number.) – The value of the output Var.
dtype (str, jittor type-cast function, or None.) – The data type of the output Var. Defaults to jt.float32.
- Returns
The output Var.
- Return type
jittor.Var
- jittor.full_like(x, val, dtype=None) jittor_core.jittor_core.Var [源代码]¶
Constructs a jittor Var with all elements set to val and shape same with x.
- Parameters
x (jt.Var.) – The reference jittor Var.
val (number.) – The value of the output Var.
dtype (str, optional) – if None, the dtype of the output is the same as x. Otherwise, use the specified dtype. Defaults to None.
- Returns
The output Var.
- Return type
jittor.Var
- class jittor.log_capture_scope(**jt_flags)[源代码]¶
log capture scope
Example:
with jt.log_capture_scope(log_v=0) as logs:
    LOG.v("...")
print(logs)
- class jittor.no_grad(**jt_flags)[源代码]¶
no_grad scope; all variables created inside this scope will have gradient computation disabled.
Example:
import jittor as jt

with jt.no_grad():
    ...
- jittor.normal(mean, std, size=None, dtype='float32') jittor_core.jittor_core.Var [源代码]¶
samples random values from a normal distribution.
- Parameters
mean (int or jt.Var) – means of the normal distributions.
std (int or jt.Var) – standard deviations of the normal distributions.
size (tuple, optional) – shape of the output. If not specified, the output shape is determined by mean or std; in this case, an exception is raised if mean and std are both integers or have different shapes. Defaults to None.
dtype (str, optional) – data type of the output, defaults to “float32”.
Example:
>>> jt.normal(5, 3, size=(2,3))
jt.Var([[ 8.070848   7.654219  10.252696 ]
 [ 6.383718   7.8817277  3.0786133]], dtype=float32)
>>> mean = jt.randint(low=0, high=10, shape=(10,))
>>> jt.normal(mean, 0.1)
jt.Var([1.9524184 1.0749301 7.9864206 5.9407325 8.1596155 4.824019  7.955083
 8.972998  6.0674286 8.88026  ], dtype=float32)
- jittor.ones(*shape, dtype='float32')[源代码]¶
Constructs a jittor Var with all elements set to 1.
- Parameters
shape (list or tuple.) – The shape of the output Var.
dtype (str, jittor type-cast function, or None.) – The data type of the output Var.
- Returns
The output Var.
- Return type
jittor.Var
- jittor.ones_like(x)[源代码]¶
Constructs a jittor Var with all elements set to 1 and shape same with x.
- Parameters
x (jt.Var) – The reference jittor Var.
- Returns
The output Var.
- Return type
jittor.Var
- jittor.outer(x, y)[源代码]¶
Returns the outer product of two 1-D vectors.
- Parameters
x (jt.Var, numpy array, or python sequence.) – the input Var.
y (jt.Var, numpy array, or python sequence.) – the input Var.
Example:
>>> x = jt.arange(3)
>>> y = jt.arange(4)
>>> jt.outer(x, y)
jt.Var([[0 0 0 0]
 [0 1 2 3]
 [0 2 4 6]], dtype=int32)
>>> x.outer(y)
jt.Var([[0 0 0 0]
 [0 1 2 3]
 [0 2 4 6]], dtype=int32)
- jittor.permute(x, *dim)¶
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
- jittor.pow(x, y)[源代码]¶
computes x^y, element-wise.
This operation is equivalent to x ** y.
- Parameters
x (a python number or jt.Var.) – the first input.
y (a python number or jt.Var.) – the second input.
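Example (a minimal sketch):
import jittor as jt

x = jt.array([1.0, 2.0, 3.0])
print(jt.pow(x, 2))  # [1.0, 4.0, 9.0]
print(x ** 2)        # equivalent spelling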
- class jittor.profile_scope(warmup=0, rerun=0, **jt_flags)[源代码]¶
profile scope
Example:
with jt.profile_scope() as report:
    ...
print(report)
- jittor.rand(*size, dtype='float32', requires_grad=True) jittor_core.jittor_core.Var [源代码]¶
samples random numbers from a uniform distribution on the interval [0, 1).
- Parameters
size (int or a sequence of int) – shape of the output.
dtype (str, optional) – data type, defaults to “float32”.
requires_grad (bool, optional) – whether to enable gradient back-propagation, defaults to True.
Example:
>>> jt.rand(3)
jt.Var([0.31005102 0.02765604 0.8150749 ], dtype=float32)
>>> jt.rand(2, 3)
jt.Var([[0.96414304 0.3519264  0.8268017 ]
 [0.05658621 0.04449705 0.86190987]], dtype=float32)
- jittor.rand_like(x, dtype=None) jittor_core.jittor_core.Var [源代码]¶
samples random values from a standard uniform distribution with the same shape as x.
- Parameters
x (jt.Var) – reference variable.
dtype (str, optional) – if None, the dtype of the output is the same as x. Otherwise, use the specified dtype. Defaults to None.
Example:
>>> x = jt.zeros((2, 3))
>>> jt.rand_like(x)
jt.Var([[0.6164821  0.21476883 0.61959815]
 [0.58626485 0.35345772 0.5638483 ]], dtype=float32)
- jittor.randint(low, high=None, shape=(1,), dtype='int32') jittor_core.jittor_core.Var [源代码]¶
samples random integers from a uniform distribution on the interval [low, high).
- Parameters
low (int, optional) – lowest integer to be drawn from the distribution, defaults to 0.
high (int) – One above the highest integer to be drawn from the distribution.
shape (tuple, optional) – shape of the output size, defaults to (1,).
dtype (str, optional) – data type of the output, defaults to “int32”.
Example:
>>> jt.randint(3, shape=(3, 3))
jt.Var([[2 0 2]
 [2 1 2]
 [2 0 1]], dtype=int32)
>>> jt.randint(1, 3, shape=(3, 3))
jt.Var([[2 2 2]
 [1 1 2]
 [1 1 1]], dtype=int32)
- jittor.randint_like(x, low, high=None) jittor_core.jittor_core.Var [源代码]¶
samples random integers from a uniform distribution on the interval [low, high), with the same shape as x.
- Parameters
x (jt.Var) – reference variable.
low (int, optional) – lowest integer to be drawn from the distribution, defaults to 0.
high (int) – One above the highest integer to be drawn from the distribution.
Example:
>>> x = jt.zeros((2, 3))
>>> jt.randint_like(x, 10)
jt.Var([[9. 3. 4.]
 [4. 8. 5.]], dtype=float32)
>>> jt.randint_like(x, 10, 20)
jt.Var([[17. 11. 18.]
 [14. 17. 15.]], dtype=float32)
- jittor.randn(*size, dtype='float32', requires_grad=True) jittor_core.jittor_core.Var [源代码]¶
samples random numbers from a standard normal distribution.
- Parameters
size (int or a sequence of int) – shape of the output.
dtype (str, optional) – data type, defaults to “float32”.
requires_grad (bool, optional) – whether to enable gradient back-propagation, defaults to True.
Example:
>>> jt.randn(3)
jt.Var([-1.019889   -0.30377278 -1.4948598 ], dtype=float32)
>>> jt.randn(2, 3)
jt.Var([[-0.15989183 -1.5010914   0.5476955 ]
 [-0.612632   -1.1471151  -1.1879086 ]], dtype=float32)
- jittor.randn_like(x, dtype=None) jittor_core.jittor_core.Var [源代码]¶
samples random values from a standard normal distribution with the same shape as x.
- Parameters
x (jt.Var) – reference variable.
dtype (str, optional) – if None, the dtype of the output is the same as x. Otherwise, use the specified dtype. Defaults to None.
Example:
>>> x = jt.zeros((2, 3))
>>> jt.randn_like(x)
jt.Var([[-1.1647032   0.34847224 -1.3061888 ]
 [ 1.068085   -0.34366122  0.13172573]], dtype=float32)
- jittor.random(shape, dtype='float32', type='uniform')[源代码]¶
Constructs a random jittor Var.
- Parameters
shape (list or tuple.) – The shape of the random Var.
dtype (str, jittor type-cast function, or None.) – The data type of the random Var.
type (str) – The random distribution, can be 'uniform' or 'normal'.
Example:
>>> jt.random((2, 3))
jt.Var([[0.96788853 0.28334728 0.30482838]
 [0.46107793 0.62798643 0.03457401]], dtype=float32)
- jittor.register_hook(v, hook)[源代码]¶
Registers a hook on any jittor Variable; if the hook returns a value that is not None, the gradient of this variable will be replaced by that value.
Example:
x = jt.array([0.0, 0.0])
y = x * [1, 2]
y.register_hook(lambda g: g*2)
dx = jt.grad(y, x)
print(dx)  # will be [2, 4]
- jittor.reshape(x, *shape)[源代码]¶
Document: *
Returns a tensor with the same data and number of elements as input, but with the specified shape.
A single dimension may be -1, in which case it’s inferred from the remaining dimensions and the number of elements in input.
[in] x: the input jt.Var
[in] shape: the output shape, an integer array
- Example-1::
>>> a = jt.randint(0, 10, shape=(12,))
>>> a
jt.Var([4 0 8 4 6 3 1 8 1 1 2 2], dtype=int32)
>>> jt.reshape(a, (3, 4))
jt.Var([[4 0 8 4]
 [6 3 1 8]
 [1 1 2 2]], dtype=int32)
>>> jt.reshape(a, (-1, 6))
jt.Var([[4 0 8 4 6 3]
 [1 8 1 1 2 2]], dtype=int32)
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
- jittor.save(params_dict, path: str)[源代码]¶
saves the parameter dictionary to a file.
- Parameters
params_dict (list or dictionary) – parameters to be saved
path (str) – file path
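Example (a minimal sketch; saving a module's state dict, with a hypothetical file name):
import jittor as jt
from jittor import nn

net = nn.Linear(2, 5)
jt.save(net.state_dict(), "net.pkl")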
- jittor.single_process_scope(rank=0)[源代码]¶
Code in this scope will only be executed by single process.
All the mpi code inside this scope will have no effect; mpi.world_rank() and mpi.local_rank() will return 0, and world_size() will return 1.
Example:
@jt.single_process_scope(rank=0)
def xxx():
    ...
- jittor.transpose(x, *dim)[源代码]¶
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
- jittor.var(x, dim=None, dims=None, unbiased=False, keepdims=False)[源代码]¶
return the sample variance. If unbiased is True, Bessel’s correction will be used.
- Parameters
x (jt.Var.) – the input jittor Var.
dim (int.) – the dimension to compute the variance. If both dim and dims are None, the variance of the whole tensor will be computed.
dims (tuple of int.) – the dimensions to compute the variance. If both dim and dims are None, the variance of the whole tensor will be computed.
unbiased (bool.) – if True, Bessel's correction will be used.
keepdims (bool.) – if True, the reduced dimensions are retained in the output with size 1.
Example:
>>> a = jt.rand(3)
>>> a
jt.Var([0.79613626 0.29322362 0.19785859], dtype=float32)
>>> a.var()
jt.Var([0.06888353], dtype=float32)
>>> a.var(unbiased=True)
jt.Var([0.10332529], dtype=float32)
- jittor.view(x, *shape)¶
Document: *
Returns a tensor with the same data and number of elements as input, but with the specified shape.
A single dimension may be -1, in which case it’s inferred from the remaining dimensions and the number of elements in input.
[in] x: the input jt.Var
[in] shape: the output shape, an integer array
- Example-1::
>>> a = jt.randint(0, 10, shape=(12,))
>>> a
jt.Var([4 0 8 4 6 3 1 8 1 1 2 2], dtype=int32)
>>> jt.reshape(a, (3, 4))
jt.Var([[4 0 8 4]
 [6 3 1 8]
 [1 1 2 2]], dtype=int32)
>>> jt.reshape(a, (-1, 6))
jt.Var([[4 0 8 4 6 3]
 [1 8 1 1 2 2]], dtype=int32)
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
- jittor.zeros(*shape, dtype='float32')[源代码]¶
Constructs a jittor Var with all elements set to 0.
- Parameters
shape (list or tuple.) – The shape of the output Var.
dtype (str, jittor type-cast function, or None.) – The data type of the output Var.
- Returns
The output Var.
- Return type
jittor.Var
- jittor.zeros_like(x, dtype=None) jittor_core.jittor_core.Var [源代码]¶
Constructs a jittor Var with all elements set to 0 and shape same with x.
- Parameters
x (jt.Var) – The reference jittor Var.
dtype (str, optional) – if None, the dtype of the output is the same as x. Otherwise, use the specified dtype. Defaults to None.
- Returns
The output Var.
- Return type
jittor.Var
jittor.core¶
The following are Jittor's core APIs, which can be accessed directly via jittor.core.XXX or jittor.XXX.
- class jittor_core.DumpGraphs¶
- inputs¶
Declaration: vector<vector<int>> inputs;
- nodes_info¶
Declaration: vector<string> nodes_info;
- outputs¶
Declaration: vector<vector<int>> outputs;
- class jittor_core.Flags¶
- addr2line_path¶
- Document:
addr2line_path(type:string, default:””): Path of addr2line.
Declaration: string _get_addr2line_path()
- amp_level¶
- Document:
auto_mixed_precision_level(type:int, default:0): Auto mixed-precision optimization level. 0: do not use fp16; 1-3: reserved levels, not using fp16 for now; 4: prefer fp16, but some ops use fp32, e.g. sum, exp; 5: similar to 4, and array ops will automatically convert to fp16; 6: all ops prefer fp16
Declaration: int _get_auto_mixed_precision_level()
- amp_reg¶
- Document:
amp_reg(type:int, default:0): Auto mixed-precision control registers. bit 0: prefer 32; bit 1: prefer 16; bit 2: keep reduce type; bit 3: keep white list type; bit 4: array-like ops prefer too
Declaration: int _get_amp_reg()
- auto_convert_64_to_32¶
- Document:
auto_convert_64_to_32(type:int, default:1): auto convert 64bit numpy array into 32bit jittor array
Declaration: int _get_auto_convert_64_to_32()
- auto_mixed_precision_level¶
- Document:
auto_mixed_precision_level(type:int, default:0): Auto mixed-precision optimization level. 0: do not use fp16; 1-3: reserved levels, not using fp16 for now; 4: prefer fp16, but some ops use fp32, e.g. sum, exp; 5: similar to 4, and array ops will automatically convert to fp16; 6: all ops prefer fp16
Declaration: int _get_auto_mixed_precision_level()
- cache_path¶
- Document:
cache_path(type:string, default:””): Cache path of jittor
Declaration: string _get_cache_path()
- cc_flags¶
- Document:
cc_flags(type:string, default:””): Flags of C++ compiler
Declaration: string _get_cc_flags()
- cc_path¶
- Document:
cc_path(type:string, default:””): Path of C++ compiler
Declaration: string _get_cc_path()
- cc_type¶
- Document:
cc_type(type:string, default:””): Type of C++ compiler (clang, icc, g++)
Declaration: string _get_cc_type()
- check_graph¶
- Document:
check_graph(type:int, default:0): Unify graph sanity check.
Declaration: int _get_check_graph()
- compile_options¶
- Document:
compile_options(type:fast_shared_ptr<loop_options_t>, default:{}): Override the default loop transform options
Declaration: fast_shared_ptr<loop_options_t> _get_compile_options()
- cpu_mem_limit¶
- Document:
cpu_mem_limit(type:int64, default:-1): cpu_mem_limit
Declaration: int64 _get_cpu_mem_limit()
- cuda_archs¶
- Document:
cuda_archs(type:vector<int>, default:{}): Cuda arch
Declaration: vector<int> _get_cuda_archs()
- device_id¶
- Document:
device_id(type:int, default:-1): number of the device to use
Declaration: int _get_device_id()
- device_mem_limit¶
- Document:
device_mem_limit(type:int64, default:-1): device_mem_limit
Declaration: int64 _get_device_mem_limit()
- disable_lock¶
- Document:
disable_lock(type:bool, default:0): Disable file lock
Declaration: bool _get_disable_lock()
- enable_tuner¶
- Document:
enable_tuner(type:int, default:1): Enable tuner.
Declaration: int _get_enable_tuner()
- exclude_pass¶
- Document:
exclude_pass(type:string, default:””): Don’t run certain pass.
Declaration: string _get_exclude_pass()
- extra_gdb_cmd¶
- Document:
extra_gdb_cmd(type:string, default:””): Extra commands passed to GDB, separated by (;).
Declaration: string _get_extra_gdb_cmd()
- gdb_attach¶
- Document:
gdb_attach(type:int, default:0): attach GDB to the current process.
Declaration: int _get_gdb_attach()
- gdb_path¶
- Document:
gdb_path(type:string, default:””): Path of GDB.
Declaration: string _get_gdb_path()
- gopt_disable¶
- Document:
gopt_disable(type:int, default:0): Disable graph optimizer.
Declaration: int _get_gopt_disable()
- has_pybt¶
- Document:
has_pybt(type:int, default:0): GDB has pybt or not.
Declaration: int _get_has_pybt()
- jit_search_kernel¶
- Document:
jit_search_kernel(type:int, default:0): Jit search for the fastest kernel.
Declaration: int _get_jit_search_kernel()
- jit_search_rerun¶
- Document:
jit_search_rerun(type:int, default:10):
Declaration: int _get_jit_search_rerun()
- jit_search_warmup¶
- Document:
jit_search_warmup(type:int, default:2):
Declaration: int _get_jit_search_warmup()
- jittor_path¶
- Document:
jittor_path(type:string, default:””): Source path of jittor
Declaration: string _get_jittor_path()
- l1_cache_size¶
- Document:
l1_cache_size(type:int, default:32768): size of level 1 cache (byte)
Declaration: int _get_l1_cache_size()
- lazy_execution¶
- Document:
lazy_execution(type:int, default:1): Enabled by default. If disabled, eager execution is used immediately instead of lazy execution. Disabling this flag makes error messages and traceback information better, but raises memory consumption and lowers performance.
Declaration: int _get_lazy_execution()
- log_file¶
- Document:
log_file(type:string, default:””): log to file, mpi env will add $OMPI_COMM_WORLD_RANK suffix
Declaration: string _get_log_file()
- log_op_hash¶
- Document:
log_op_hash(type:string, default:””): Output compiler pass result of certain hash of op.
Declaration: string _get_log_op_hash()
- log_silent¶
- Document:
log_silent(type:int, default:0): The log will be completely silent.
Declaration: int _get_log_silent()
- log_sync¶
- Document:
log_sync(type:int, default:1): Set log printed synchronously.
Declaration: int _get_log_sync()
- log_v¶
- Document:
log_v(type:int, default:0): Verbose level of logging
Declaration: int _get_log_v()
- log_vprefix¶
- Document:
log_vprefix(type:string, default:””): Verbose level of logging prefix
example: log_vprefix='op=1,node=2,executor.cc:38$=1000'
Declaration: string _get_log_vprefix()
- no_fuse¶
- Document:
no_fuse(type:bool, default:0): No fusion optimization for all jittor Var creation
Declaration: bool _get_no_fuse()
- no_grad¶
- Document:
no_grad(type:bool, default:0): No grad for all jittor Var creation
Declaration: bool _get_no_grad()
- node_order¶
- Document:
node_order(type:uint8, default:0): id prior
Declaration: uint8 _get_node_order()
- nvcc_flags¶
- Document:
nvcc_flags(type:string, default:””): Flags of CUDA C++ compiler
Declaration: string _get_nvcc_flags()
- nvcc_path¶
- Document:
nvcc_path(type:string, default:””): Path of CUDA C++ compiler
Declaration: string _get_nvcc_path()
- para_opt_level¶
- Document:
para_opt_level(type:int, default:3): para_opt_level
Declaration: int _get_para_opt_level()
- profile_memory_enable¶
- Document:
profile_memory_enable(type:int, default:0): Enable memory profiler.
Declaration: int _get_profile_memory_enable()
- profiler_enable¶
- Document:
profiler_enable(type:int, default:0): Enable profiler.
Declaration: int _get_profiler_enable()
- profiler_hide_relay¶
- Document:
profiler_hide_relay(type:int, default:0): Profiler hide relayed op.
Declaration: int _get_profiler_hide_relay()
- profiler_record_peek¶
- Document:
profiler_record_peek(type:int, default:0): Profiler record peek mem bandwidth.
Declaration: int _get_profiler_record_peek()
- profiler_record_shape¶
- Document:
profiler_record_shape(type:int, default:0): Profiler record shape for op.
Declaration: int _get_profiler_record_shape()
- profiler_rerun¶
- Document:
profiler_rerun(type:int, default:0): Profiler rerun.
Declaration: int _get_profiler_rerun()
- profiler_warmup¶
- Document:
profiler_warmup(type:int, default:0): Profiler warmup.
Declaration: int _get_profiler_warmup()
- python_path¶
- Document:
python_path(type:string, default:””): Path of python interpreter
Declaration: string _get_python_path()
- reuse_array¶
- Document:
reuse_array(type:uint8, default:0): try reuse np.array memory into jt.array
Declaration: uint8 _get_reuse_array()
- rewrite_op¶
- Document:
rewrite_op(type:int, default:1): Rewrite source file of jit operator or not
Declaration: int _get_rewrite_op()
- stat_allocator_total_alloc_byte¶
- Document:
stat_allocator_total_alloc_byte(type:size_t, default:0): Total alloc byte
Declaration: size_t _get_stat_allocator_total_alloc_byte()
- stat_allocator_total_alloc_call¶
- Document:
stat_allocator_total_alloc_call(type:size_t, default:0): Number of alloc function call
Declaration: size_t _get_stat_allocator_total_alloc_call()
- stat_allocator_total_free_byte¶
- Document:
stat_allocator_total_free_byte(type:size_t, default:0): Total free byte
Declaration: size_t _get_stat_allocator_total_free_byte()
- stat_allocator_total_free_call¶
- Document:
stat_allocator_total_free_call(type:size_t, default:0): Number of free function call
Declaration: size_t _get_stat_allocator_total_free_call()
- th_mode¶
- Document:
th_mode(type:uint8, default:0): th mode
Declaration: uint8 _get_th_mode()
- trace_depth¶
- Document:
trace_depth(type:int, default:10): trace depth for GDB.
Declaration: int _get_trace_depth()
- trace_py_var¶
- Document:
trace_py_var(type:int, default:0): Trace py stack max depth for debug.
Declaration: int _get_trace_py_var()
- trace_var_data¶
- Document:
trace_var_data(type:int, default:0): Trace py stack max depth for debug.
Declaration: int _get_trace_var_data()
- try_use_32bit_index¶
- Document:
try_use_32bit_index(type:int, default:0): If not overflow, try to use 32 bit type as index type.
Declaration: int _get_try_use_32bit_index()
- use_acl¶
- Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
- use_corex¶
- Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
- use_cuda¶
- Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
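Example (a minimal sketch; this is the same idiom used in the CUDA code-op examples later in this document):
import jittor as jt

jt.flags.use_cuda = 1  # 0: CPU only; 1: try to use CUDA; 2: force CUDA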
- use_cuda_managed_allocator¶
- Document:
use_cuda_managed_allocator(type:int, default:0): Enable cuda_managed_allocator
Declaration: int _get_use_cuda_managed_allocator()
- use_device¶
- Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
- use_nfef_allocator¶
- Document:
use_nfef_allocator(type:int, default:0): Enable never free exact fit allocator
Declaration: int _get_use_nfef_allocator()
- use_parallel_op_compiler¶
- Document:
use_parallel_op_compiler(type:int, default:16): Number of threads the parallel op compiler uses, default 16; setting this value to 0 disables the parallel op compiler.
Declaration: int _get_use_parallel_op_compiler()
- use_rocm¶
- Document:
use_cuda(type:int, default:0): Use cuda or not. 1 for trying to use cuda, 2 for forcing to use cuda.
Declaration: int _get_use_cuda()
- use_sfrl_allocator¶
- Document:
use_sfrl_allocator(type:int, default:1): Enable sfrl allocator
Declaration: int _get_use_sfrl_allocator()
- use_stat_allocator¶
- Document:
use_stat_allocator(type:int, default:0): Enable stat allocator
Declaration: int _get_use_stat_allocator()
- use_temp_allocator¶
- Document:
use_temp_allocator(type:int, default:1): Enable temp allocator
Declaration: int _get_use_temp_allocator()
- use_tensorcore¶
- Document:
use_tensorcore(type:int, default:0): use tensor core
Declaration: int _get_use_tensorcore()
- use_threading¶
- Document:
use_threading(type:int, default:0): Allow to use python threading with jittor.
Declaration: int _get_use_threading()
- class jittor_core.MemInfo¶
- total_cpu_ram¶
Declaration: int64 total_cpu_ram;
- total_cpu_used¶
Declaration: int64 total_cpu_used;
- total_cuda_ram¶
Declaration: int64 total_cuda_ram;
- total_cuda_used¶
Declaration: int64 total_cuda_used;
- class jittor_core.NanoString¶
- is_bool()¶
Declaration: inline bool is_bool()
- is_float()¶
Declaration: inline bool is_float()
- is_int()¶
Declaration: inline bool is_int()
- class jittor_core.RingBuffer¶
- clear()¶
Declaration: inline void clear()
- is_stop()¶
Declaration: inline bool is_stop()
- keep_numpy_array()¶
Declaration: inline void keep_numpy_array(bool keep)
- pop()¶
Declaration: PyObject* pop()
- push()¶
Declaration: void push(PyObject* obj)
- recv()¶
Declaration: PyObject* pop()
- send()¶
Declaration: void push(PyObject* obj)
- size¶
Declaration: inline uint64 size()
- stop()¶
Declaration: inline void stop()
- total_pop()¶
Declaration: inline uint64 total_pop()
- total_push()¶
Declaration: inline uint64 total_push()
- jittor_core.Var¶
alias of
jittor_core.jittor_core.Var
- class jittor_core.ZipFile¶
- list()¶
Declaration: inline map<string,uint64> list()
- read()¶
Declaration: inline void read(const string& filename, uint64 ptr)
- read_var()¶
Declaration: inline VarHolder* read_var(const string& filename, NanoString dtype=ns_uint8)
- valid()¶
Declaration: inline int valid()
- jittor_core.clean_graph()¶
Document: *
Clean graph, try to reduce memory usage.
This operation will stop grad for all previous nodes.
Backpropagation for previous nodes will be unavailable.
This operation is often used between train and eval.
Declaration: void clean_graph()
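Example (a minimal sketch; assuming a loop elsewhere that alternates training and evaluation):
import jittor as jt

# ... training steps ...
jt.clean_graph()  # stop grad for previous nodes and reduce memory before eval
# ... evaluation steps (no gradient needed) ...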
- jittor_core.cleanup()¶
Declaration: void cleanup()
- jittor_core.clear_trace_data()¶
Declaration: void clear_trace_data()
- jittor_core.display_max_memory_info()¶
Declaration: void display_max_memory_info()
- jittor_core.display_memory_info()¶
Declaration: void display_memory_info(const char* fileline=””, bool dump_var=false, bool red_color=false)
- jittor_core.dump_all_graphs()¶
Declaration: DumpGraphs dump_all_graphs()
- jittor_core.dump_trace_data()¶
Declaration: PyObject* dump_trace_data()
- jittor_core.fetch_sync()¶
Declaration: vector<ArrayArgs> fetch_sync(const vector<VarHolder*>& vh)
- jittor_core.gc()¶
Declaration: void gc_all()
- jittor_core.get_device_count()¶
Declaration: inline int get_device_count()
- jittor_core.get_max_memory_info()¶
Declaration: string get_max_memory_info()
- jittor_core.get_mem_info()¶
Declaration: inline MemInfo get_mem_info()
- jittor_core.get_seed()¶
Document: * Returns the seed of jittor random number generator.
Declaration: int get_seed()
- jittor_core.grad()¶
Declaration: vector<VarHolder*> _grad(VarHolder* loss, const vector<VarHolder*>& targets, bool retain_graph=true)
- jittor_core.graph_check()¶
Declaration: void do_graph_check()
- jittor_core.hash()¶
- Document:
simple hash function
Declaration: inline uint hash(const char* input)
- jittor_core.jt_init_subprocess()¶
Declaration: void jt_init_subprocess()
- jittor_core.migrate_all_to_cpu()¶
Declaration: void migrate_all_to_cpu()
- jittor_core.number_of_hold_vars()¶
Declaration: inline static uint64 get_number_of_hold_vars()
- jittor_core.number_of_lived_ops()¶
Declaration: inline static int64 get_number_of_lived_ops()
- jittor_core.number_of_lived_vars()¶
Declaration: inline static int64 get_number_of_lived_vars()
- jittor_core.print_trace()¶
Declaration: inline static void __print_trace()
- jittor_core.seed()¶
Document: * Sets the seed of jittor random number generator. Also see @jittor.set_global_seed.
[in] seed: a python number.
Declaration: void set_seed(int seed)
- jittor_core.set_lock_path()¶
Declaration: void set_lock_path(string path)
- jittor_core.set_seed()¶
Document: * Sets the seed of jittor random number generator. Also see @jittor.set_global_seed.
[in] seed: a python number.
Declaration: void set_seed(int seed)
- jittor_core.sync()¶
Declaration: void sync(const vector<VarHolder*>& vh=vector<VarHolder*>(), bool device_sync=false, bool weak_sync=true)
- jittor_core.sync_all()¶
Declaration: void sync_all(bool device_sync=false)
- jittor_core.tape_together()¶
Declaration: void tape_together(
const vector<VarHolder*>& taped_inputs, const vector<VarHolder*>& taped_outputs, GradCallback&& grad_callback
)
- jittor_core.ternary_out_hint()¶
Declaration: VarHolder* ternary_out_hint(VarHolder* cond, VarHolder* x, VarHolder* y)
jittor.ops¶
This is the API documentation of Jittor's basic operator module; these APIs can be accessed directly via jittor.ops.XXX or jittor.XXX.
- jittor_core.ops.abs()¶
Document: *
Returns the absolute value of the input x.
[in] x: the input jt.Var
- Example-1::
>>> jt.abs(jt.float32([-1, 0, 1]))
jt.Var([1. 0. 1.], dtype=float32)
Declaration: VarHolder* abs(VarHolder* x)
- jittor_core.ops.acos()¶
Document: *
Returns the inverse cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.5876564  0.740723  -0.667666   0.5371753], dtype=float32)
>>> jt.acos(x)
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
>>> x.acos()
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
Declaration: VarHolder* acos(VarHolder* x)
- jittor_core.ops.acosh()¶
Document: *
Returns the inverse hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) + 1
>>> x
jt.Var([1.3609099 1.8137748 1.1146184 1.3911307], dtype=float32)
>>> jt.acosh(x)
jt.Var([0.8259237  1.2020639  0.47432774 0.8579033 ], dtype=float32)
>>> x.acosh()
jt.Var([0.8259237  1.2020639  0.47432774 0.8579033 ], dtype=float32)
Declaration: VarHolder* acosh(VarHolder* x)
- jittor_core.ops.add()¶
Document: *
Element-wise adds x and y and returns a new Var. This operation is equivalent to x + y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* add(VarHolder* x, VarHolder* y)
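Example (a minimal sketch):
import jittor as jt

a = jt.int32([1, 2])
b = jt.int32([3, 4])
print(jt.add(a, b))  # [4, 6]
print(a + b)         # equivalent spelling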
- jittor_core.ops.all_()¶
Document: *
Tests if all elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 1 1]
 [0 1 0]], dtype=int32)
>>> jt.all_(x)
jt.Var([False], dtype=int32)
>>> x.all_()
jt.Var([False], dtype=int32)
>>> x.all_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.all_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.any_()¶
Document: *
Tests if any elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 0 1]
 [0 0 0]], dtype=int32)
>>> jt.any_(x)
jt.Var([True], dtype=int32)
>>> x.any_()
jt.Var([True], dtype=int32)
>>> x.any_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.any_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.arccos()¶
Document: *
Returns the inverse cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.5876564  0.740723  -0.667666   0.5371753], dtype=float32)
>>> jt.acos(x)
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
>>> x.acos()
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
Declaration: VarHolder* acos(VarHolder* x)
- jittor_core.ops.arccosh()¶
Document: *
Returns the inverse hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) + 1
>>> x
jt.Var([1.3609099 1.8137748 1.1146184 1.3911307], dtype=float32)
>>> jt.acosh(x)
jt.Var([0.8259237  1.2020639  0.47432774 0.8579033 ], dtype=float32)
>>> x.acosh()
jt.Var([0.8259237  1.2020639  0.47432774 0.8579033 ], dtype=float32)
Declaration: VarHolder* acosh(VarHolder* x)
- jittor_core.ops.arcsin()¶
Document: *
Returns the arcsine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.09342023 -0.42522037  0.9264933  -0.785264  ], dtype=float32)
>>> jt.asin(x)
jt.Var([ 0.09355665 -0.43920535  1.1849847  -0.9031224 ], dtype=float32)
>>> x.asin()
jt.Var([ 0.09355665 -0.43920535  1.1849847  -0.9031224 ], dtype=float32)
Declaration: VarHolder* asin(VarHolder* x)
- jittor_core.ops.arcsinh()¶
Document: *
Returns the inverse hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.9749726  -0.52341473  0.8906148   1.0338128 ], dtype=float32)
>>> jt.asinh(x)
jt.Var([-1.4323865  -0.5020559   0.8018747   0.90508187], dtype=float32)
>>> x.asinh()
jt.Var([-1.4323865  -0.5020559   0.8018747   0.90508187], dtype=float32)
Declaration: VarHolder* asinh(VarHolder* x)
- jittor_core.ops.arctan()¶
Document: *
Returns the inverse tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-0.85885596  1.187804    0.47249675  0.95933187], dtype=float32)
>>> jt.atan(x)
jt.Var([-0.70961297  0.87102956  0.44140393  0.76464504], dtype=float32)
>>> x.atan()
jt.Var([-0.70961297  0.87102956  0.44140393  0.76464504], dtype=float32)
Declaration: VarHolder* atan(VarHolder* x)
- jittor_core.ops.arctanh()¶
Document: *
Returns the inverse hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.9062414  -0.799802   -0.27219176 -0.7274077 ], dtype=float32)
>>> jt.atanh(x)
jt.Var([ 1.5060828  -1.0980625  -0.27922946 -0.9231999 ], dtype=float32)
>>> x.atanh()
jt.Var([ 1.5060828  -1.0980625  -0.27922946 -0.9231999 ], dtype=float32)
Declaration: VarHolder* atanh(VarHolder* x)
- jittor_core.ops.arg_reduce()¶
Document: *
Returns the indices of the maximum / minimum of the input across a dimension.
[in] x: the input jt.Var.
[in] op: “max” or “min”.
[in] dim: int. Specifies which dimension to be reduced.
[in] keepdims: bool. Whether the output has dim retained or not.
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 3))
>>> x
jt.Var([[4 2 5]
 [6 7 1]], dtype=int32)
>>> jt.arg_reduce(x, 'max', dim=1, keepdims=False)
[jt.Var([2 1], dtype=int32), jt.Var([5 7], dtype=int32)]
>>> jt.arg_reduce(x, 'min', dim=1, keepdims=False)
[jt.Var([1 2], dtype=int32), jt.Var([2 1], dtype=int32)]
Declaration: vector_to_tuple<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)
- jittor_core.ops.argsort()¶
Document: *
Argsort Operator performs an indirect sort by a given key or compare function.
x is input, y is output index, satisfy:
x[y[0]] <= x[y[1]] <= x[y[2]] <= … <= x[y[n]]
or
key(y[0]) <= key(y[1]) <= key(y[2]) <= … <= key(y[n])
or
compare(y[0], y[1]) && compare(y[1], y[2]) && …
[in] x: input var for sort
[in] dim: the dimension to sort along
[in] descending: whether the elements are sorted in descending order or not (default False).
[in] dtype: type of return indexes
[out] index: index have the same size with sorted dim
[out] value: sorted value
Example:
index, value = jt.argsort([11,13,12])
# return [0 2 1], [11 12 13]
index, value = jt.argsort([11,13,12], descending=True)
# return [1 2 0], [13 12 11]
index, value = jt.argsort([[11,13,12], [12,11,13]])
# return [[0 2 1],[1 0 2]], [[11 12 13],[11 12 13]]
index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0)
# return [[0 1 0],[1 0 1]], [[11 11 12],[12 13 13]]
Declaration: vector_to_tuple<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)
- jittor_core.ops.array()¶
Declaration: VarHolder* array__(PyObject* obj)
- jittor_core.ops.array_()¶
Declaration: VarHolder* array_(ArrayArgs&& args)
- jittor_core.ops.asin()¶
Document: *
Returns the arcsine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.09342023 -0.42522037  0.9264933  -0.785264  ], dtype=float32)
>>> jt.asin(x)
jt.Var([ 0.09355665 -0.43920535  1.1849847  -0.9031224 ], dtype=float32)
>>> x.asin()
jt.Var([ 0.09355665 -0.43920535  1.1849847  -0.9031224 ], dtype=float32)
Declaration: VarHolder* asin(VarHolder* x)
- jittor_core.ops.asinh()¶
Document: *
Returns the inverse hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.9749726  -0.52341473  0.8906148   1.0338128 ], dtype=float32)
>>> jt.asinh(x)
jt.Var([-1.4323865  -0.5020559   0.8018747   0.90508187], dtype=float32)
>>> x.asinh()
jt.Var([-1.4323865  -0.5020559   0.8018747   0.90508187], dtype=float32)
Declaration: VarHolder* asinh(VarHolder* x)
- jittor_core.ops.atan()¶
Document: *
Returns the inverse tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-0.85885596  1.187804    0.47249675  0.95933187], dtype=float32)
>>> jt.atan(x)
jt.Var([-0.70961297  0.87102956  0.44140393  0.76464504], dtype=float32)
>>> x.atan()
jt.Var([-0.70961297  0.87102956  0.44140393  0.76464504], dtype=float32)
Declaration: VarHolder* atan(VarHolder* x)
- jittor_core.ops.atanh()¶
Document: *
Returns the inverse hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.9062414  -0.799802   -0.27219176 -0.7274077 ], dtype=float32)
>>> jt.atanh(x)
jt.Var([ 1.5060828  -1.0980625  -0.27922946 -0.9231999 ], dtype=float32)
>>> x.atanh()
jt.Var([ 1.5060828  -1.0980625  -0.27922946 -0.9231999 ], dtype=float32)
Declaration: VarHolder* atanh(VarHolder* x)
- jittor_core.ops.binary()¶
Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)
- jittor_core.ops.bitwise_and()¶
Document: *
Computes the bitwise AND of x and y.
[in] x: the first input, jt.Var (integer or boolean).
[in] y: the second input, jt.Var (integer or boolean).
Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)
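Example (a minimal sketch):
import jittor as jt

a = jt.int32([6, 5])
b = jt.int32([3, 1])
print(jt.bitwise_and(a, b))  # [2, 1], since 6 & 3 == 2 and 5 & 1 == 1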
- jittor_core.ops.bitwise_not()¶
Document: *
Returns the bitwise NOT of the input x.
[in] x: the input jt.Var, integer or boolean.
- Example-1::
>>> jt.bitwise_not(jt.int32([1, 2, -3]))
jt.Var([-2 -3  2], dtype=int32)
Declaration: VarHolder* bitwise_not(VarHolder* x)
- jittor_core.ops.bitwise_or()¶
Document: *
Computes the bitwise OR of x and y.
[in] x: the first input, jt.Var (integer or boolean).
[in] y: the second input, jt.Var (integer or boolean).
Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)
- jittor_core.ops.bitwise_xor()¶
Document: *
Computes the bitwise XOR of x and y.
[in] x: the first input, jt.Var (integer or boolean).
[in] y: the second input, jt.Var (integer or boolean).
Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)
- jittor_core.ops.bool()¶
Document: *
Returns a copy of the input var, cast to boolean.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3)
>>> x
jt.Var([0 1 2], dtype=int32)
>>> x.bool()
jt.Var([False  True  True], dtype=bool)
>>> jt.bool(x)
jt.Var([False  True  True], dtype=bool)
Declaration: VarHolder* bool_(VarHolder* x)
- jittor_core.ops.broadcast()¶
Document: *
Broadcast x to a given shape.
[in] x: the input jt.Var.
[in] shape: the output shape.
[in] dims: specifies the new dimension in the output shape, an integer array.
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> jt.broadcast(x, shape=(2, 3, 2), dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector())
Document: *
Broadcast x to the same shape as y.
[in] x: the input jt.Var.
[in] y: the reference jt.Var.
[in] dims: specifies the new dimension in the output shape, an integer array.
Note
jt.broadcast_var(x, y, dims) is an alias of jt.broadcast(x, y, dims)
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> y = jt.randint(0, 10, shape=(2, 3, 2))
>>> jt.broadcast(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
>>> jt.broadcast_var(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
- jittor_core.ops.broadcast_var()¶
Document: *
Broadcast x to the same shape as y.
[in] x: the input jt.Var.
[in] y: the reference jt.Var.
[in] dims: specifies the new dimension in the output shape, an integer array.
Note
jt.broadcast_var(x, y, dims) is an alias of jt.broadcast(x, y, dims)
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> y = jt.randint(0, 10, shape=(2, 3, 2))
>>> jt.broadcast(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
>>> jt.broadcast_var(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
- jittor_core.ops.candidate()¶
Document: *
Candidate Operator performs an indirect candidate filter given a fail condition.
x is input, y is output index, satisfy:
not fail_cond(y[0], y[1]) and not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and ... ... and not fail_cond(y[m-2], y[m-1])
where m is the number of selected candidates.
Pseudo code:
y = []
for i in range(n):
    pass = True
    for j in y:
        if (@fail_cond):
            pass = false
            break
    if (pass):
        y.append(i)
return y
[in] x: input var for filter
[in] fail_cond: code for fail condition
[in] dtype: type of return indexes
[out] index: .
Example:
jt.candidate(jt.random((100,2)), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')
# return y satisfy:
#     x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and ... and x[y[m-2], 0] <= x[y[m-1], 0] and
#     x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and ... and x[y[m-2], 1] <= x[y[m-1], 1]
Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)
- jittor_core.ops.cast()¶
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
- jittor_core.ops.ceil()¶
Document: *
Returns the smallest integer greater than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32)
>>> jt.ceil(x)
jt.Var([-1.0  0.0  0.0  0.0], dtype=float32)
>>> x.ceil()
jt.Var([-1.0  0.0  0.0  0.0], dtype=float32)
Declaration: VarHolder* ceil(VarHolder* x)
- jittor_core.ops.ceil_int()¶
Document: *
Returns the smallest integer greater than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32)
>>> jt.ceil_int(x)
jt.Var([-1  0  0  0], dtype=int32)
>>> x.ceil_int()
jt.Var([-1  0  0  0], dtype=int32)
Declaration: VarHolder* ceil_int(VarHolder* x)
- jittor_core.ops.clone()¶
Declaration: VarHolder* clone(VarHolder* x)
- jittor_core.ops.code()¶
Document: *
Code Operator for easily customized op.
[in] shape: the output shape, an integer array
[in] dtype: the output data type
[in] inputs: a list of input jittor Vars
[in] cpu_src: cpu source code string, built-in values:
in{x}, in{x}_shape{y}, in{x}_stride{y}, in{x}_type, in{x}_p, @in0(...)
out{x}, out{x}_shape{y}, out{x}_stride{y}, out{x}_type, out{x}_p, @out0(...)
out, out_shape{y}, out_stride{y}, out_type, out_p, @out(...)
[in] cpu_header: cpu header code string.
[in] cuda_src: cuda source code string.
[in] cuda_header: cuda header code string.
Example-1:
from jittor import Function
import jittor as jt

class Func(Function):
    def execute(self, x):
        self.save_vars = x
        return jt.code(x.shape, x.dtype, [x],
            cpu_src='''
                for (int i=0; i<in0_shape0; i++)
                    @out(i) = @in0(i)*@in0(i)*2;
            ''')

    def grad(self, grad_x):
        x = self.save_vars
        return jt.code(x.shape, x.dtype, [x, grad_x],
            cpu_src='''
                for (int i=0; i<in0_shape0; i++)
                    @out(i) = @in1(i)*@in0(i)*4;
            ''')

a = jt.random([10])
func = Func()
b = func(a)
print(b)
print(jt.grad(b,a))
Example-2:
a = jt.array([3,2,1])
b = jt.code(a.shape, a.dtype, [a],
    cpu_header="""
        #include <algorithm>
        @alias(a, in0)
        @alias(b, out)
    """,
    cpu_src="""
        for (int i=0; i<a_shape0; i++)
            @b(i) = @a(i);
        std::sort(&@b(0), &@b(in0_shape0));
    """
)
assert (b.data==[1,2,3]).all()
Example-3:
# This example shows how to set multiple outputs in code op.
a = jt.array([3,2,1])
b,c = jt.code([(1,), (1,)], [a.dtype, a.dtype], [a],
    cpu_header="""
        #include <iostream>
        using namespace std;
    """,
    cpu_src="""
        @alias(a, in0)
        @alias(b, out0)
        @alias(c, out1)
        @b(0) = @c(0) = @a(0);
        for (int i=0; i<a_shape0; i++) {
            @b(0) = std::min(@b(0), @a(i));
            @c(0) = std::max(@c(0), @a(i));
        }
        cout << "min:" << @b(0) << " max:" << @c(0) << endl;
    """
)
assert b.data == 1, b
assert c.data == 3, c
Example-4:
# This example shows how to use dynamic shapes of jittor variables.
a = jt.array([5,-4,3,-2,1])
# negative shape for max size of the varying dimension
b,c = jt.code([(-5,), (-5,)], [a.dtype, a.dtype], [a],
    cpu_src="""
        @alias(a, in0)
        @alias(b, out0)
        @alias(c, out1)
        int num_b=0, num_c=0;
        for (int i=0; i<a_shape0; i++) {
            if (@a(i)>0)
                @b(num_b++) = @a(i);
            else
                @c(num_c++) = @a(i);
        }
        b->set_shape({num_b});
        c->set_shape({num_c});
    """
)
assert (b.data == [5,3,1]).all()
assert (c.data == [-4,-2]).all()
Example-5:
# This example shows how to customize code op compilation flags,
# such as adding an include search path, adding definitions,
# or any command line options
a = jt.random([10])
b = jt.code(a.shape, a.dtype, [a],
    cpu_src='''
        @out0(0) = HAHAHA;
    ''')
# HAHAHA is defined in the flags below
# /any/include/path can be changed to any path you want to include
b.compile_options = {"FLAGS: -DHAHAHA=233 -I/any/include/path ": 1}
print(b[0])  # will output 233
Example-6:
# This example shows how to pass custom data into a code op kernel
# without recompiling the kernel. In this example, the data {"x":123}
# can vary and the kernel will not recompile.
# NOTE: the data type passed into the kernel is float64;
# cast to int if you want an integer
a = jt.code([1], "float32", inputs=[],
    data = {"x":123},
    cpu_src='''
        @out0(0) = data["x"];
    ''').sync()
assert a.item() == 123
CUDA Example-1:
# This example shows how to use CUDA in code op.
import jittor as jt
from jittor import Function
jt.flags.use_cuda = 1

class Func(Function):
    def execute(self, a, b):
        self.save_vars = a, b
        return jt.code(a.shape, a.dtype, [a,b],
            cuda_src='''
                __global__ static void kernel1(@ARGS_DEF) {
                    @PRECALC
                    int i = threadIdx.x + blockIdx.x * blockDim.x;
                    int stride = blockDim.x * gridDim.x;
                    for (; i<in0_shape0; i+=stride)
                        @out(i) = @in0(i)*@in1(i);
                }
                kernel1<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);
            ''')

    def grad(self, grad):
        a, b = self.save_vars
        return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],
            cuda_src='''
                __global__ static void kernel2(@ARGS_DEF) {
                    @PRECALC
                    int i = threadIdx.x + blockIdx.x * blockDim.x;
                    int stride = blockDim.x * gridDim.x;
                    for (; i<in0_shape0; i+=stride) {
                        @out0(i) = @in2(i)*@in1(i);
                        @out1(i) = @in2(i)*@in0(i);
                    }
                }
                kernel2<<<(in0_shape0-1)/1024+1, 1024>>>(@ARGS);
            ''')

a = jt.random([100000])
b = jt.random([100000])
func = Func()
c = func(a,b)
print(c)
print(jt.grad(c, [a, b]))
CUDA Example-2:
# This example shows how to use multi-dimensional data with CUDA.
import jittor as jt
from jittor import Function
jt.flags.use_cuda = 1

class Func(Function):
    def execute(self, a, b):
        self.save_vars = a, b
        return jt.code(a.shape, a.dtype, [a,b],
            cuda_src='''
                __global__ static void kernel1(@ARGS_DEF) {
                    @PRECALC
                    for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)
                    for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x)
                        @out(i,j) = @in0(i,j)*@in1(i,j);
                }
                kernel1<<<32, 32>>>(@ARGS);
            ''')

    def grad(self, grad):
        a, b = self.save_vars
        return jt.code([a.shape, b.shape], [a.dtype, b.dtype], [a, b, grad],
            cuda_src='''
                __global__ static void kernel2(@ARGS_DEF) {
                    @PRECALC
                    for (int i=blockIdx.x; i<in0_shape0; i+=gridDim.x)
                    for (int j=threadIdx.x; j<in0_shape1; j+=blockDim.x) {
                        @out0(i,j) = @in2(i,j)*@in1(i,j);
                        @out1(i,j) = @in2(i,j)*@in0(i,j);
                    }
                }
                kernel2<<<32, 32>>>(@ARGS);
            ''')

a = jt.random((100,100))
b = jt.random((100,100))
func = Func()
c = func(a,b)
print(c)
print(jt.grad(c, [a, b]))
Declaration: VarHolder* code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs={}, string&& cpu_src=””, vector<string>&& cpu_grad_src={}, string&& cpu_header=””, string&& cuda_src=””, vector<string>&& cuda_grad_src={}, string&& cuda_header=””, DataMap&& data={})
Declaration: vector_to_tuple<VarHolder*> code_(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs={}, string&& cpu_src=””, vector<string>&& cpu_grad_src={}, string&& cpu_header=””, string&& cuda_src=””, vector<string>&& cuda_grad_src={}, string&& cuda_header=””, DataMap&& data={})
Declaration: vector_to_tuple<VarHolder*> code__(vector<VarHolder*>&& inputs, vector<VarHolder*>&& outputs, string&& cpu_src=””, vector<string>&& cpu_grad_src={}, string&& cpu_header=””, string&& cuda_src=””, vector<string>&& cuda_grad_src={}, string&& cuda_header=””, DataMap&& data={})
- jittor_core.ops.copy()¶
Declaration: VarHolder* copy(VarHolder* x)
- jittor_core.ops.cos()¶
Document: *
Returns the cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.32893723 -0.7112559  -0.872391    1.8001337 ], dtype=float32)
>>> jt.cos(x)
jt.Var([ 0.9463862  0.7575426  0.6429972 -0.2273323], dtype=float32)
>>> x.cos()
jt.Var([ 0.9463862  0.7575426  0.6429972 -0.2273323], dtype=float32)
Declaration: VarHolder* cos(VarHolder* x)
- jittor_core.ops.cosh()¶
Document: *
Returns the hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.cosh(x) jt.Var([1.0545894 1.2637873 1.405288 3.1078668], dtype=float32) >>> x.cosh() jt.Var([1.0545894 1.2637873 1.405288 3.1078668], dtype=float32)
Declaration: VarHolder* cosh(VarHolder* x)
- jittor_core.ops.div()¶
Document: *
Element-wise divides x by y and returns a new Var. This operation is equivalent to x / y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.empty((3,), dtype=jt.int32)
>>> a
jt.Var([707406378 707406378 707406378], dtype=int32)
>>> b = jt.empty((3,), dtype=jt.int32)
>>> b
jt.Var([674510453 171649398 538976288], dtype=int32)
>>> jt.divide(a, b)
jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
>>> a / b
jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
Returns a float value even if the dtypes of both input Vars are integers. @see jt.ops.floor_divide() for floor division.
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
- jittor_core.ops.divide()¶
Document: *
Element-wise divides x by y and returns a new Var. This operation is equivalent to x / y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.empty((3,), dtype=jt.int32)
>>> a
jt.Var([707406378 707406378 707406378], dtype=int32)
>>> b = jt.empty((3,), dtype=jt.int32)
>>> b
jt.Var([674510453 171649398 538976288], dtype=int32)
>>> jt.divide(a, b)
jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
>>> a / b
jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
Returns a float value even if the dtypes of both input Vars are integers. @see jt.ops.floor_divide() for floor division.
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
- jittor_core.ops.empty()¶
Declaration: VarHolder* empty(NanoVector shape, NanoString dtype=ns_float32)
- jittor_core.ops.equal()¶
Document: *
Returns x == y element-wise. This operation is equivalent to the Python expression x == y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
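A minimal usage sketch (illustrative; like the other comparison ops in this module, the result is a boolean Var):
>>> a = jt.int32([1, 2, 3])
>>> b = jt.int32([1, 0, 3])
>>> jt.equal(a, b)
jt.Var([ True False True], dtype=bool)
>>> a == b
jt.Var([ True False True], dtype=bool)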
Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)
- jittor_core.ops.erf()¶
Document: *
Computes the error function of each element. The error function is defined as follows:
\[erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt\]
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.49443012 0.4305426 -1.0364404 -1.2628382 ], dtype=float32)
>>> jt.erf(x)
jt.Var([ 0.51559156 0.45739546 -0.85728306 -0.9258883 ], dtype=float32)
>>> x.erf()
jt.Var([ 0.51559156 0.45739546 -0.85728306 -0.9258883 ], dtype=float32)
Declaration: VarHolder* erf(VarHolder* x)
- jittor_core.ops.erfinv()¶
Document: *
Computes the inverse error function of each element.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.00277209 -0.26642472 0.7869792 0.5415418 ], dtype=float32)
>>> jt.erfinv(x)
jt.Var([ 0.00245671 -0.24068035 0.8805613 0.5242405 ], dtype=float32)
>>> x.erfinv()
jt.Var([ 0.00245671 -0.24068035 0.8805613 0.5242405 ], dtype=float32)
Declaration: VarHolder* erfinv(VarHolder* x)
- jittor_core.ops.exp()¶
Document: *
Returns the exponential of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2
>>> x
jt.Var([1.9841381 1.4103996 0.5855549 1.4212812], dtype=float32)
>>> jt.exp(x)
jt.Var([7.2727766 4.0975924 1.7959872 4.1424246], dtype=float32)
>>> x.exp()
jt.Var([7.2727766 4.0975924 1.7959872 4.1424246], dtype=float32)
Declaration: VarHolder* exp(VarHolder* x)
- jittor_core.ops.fetch()¶
Declaration: VarHolder* fetch(vector<VarHolder*>&& inputs, FetchFunc&& func)
- jittor_core.ops.float16()¶
Document: *
Returns a copy of the input var, cast to float16 (half-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.half()
jt.Var([4.094 2.008 8.48 ], dtype=float16)
>>> jt.half(x)
jt.Var([4.094 2.008 8.48 ], dtype=float16)
>>> x.float16()
jt.Var([4.094 2.008 8.48 ], dtype=float16)
>>> jt.float16(x)
jt.Var([4.094 2.008 8.48 ], dtype=float16)
Declaration: VarHolder* float16_(VarHolder* x)
- jittor_core.ops.float32()¶
Document: *
Returns a copy of the input var, cast to float32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3)
>>> x
jt.Var([0 1 2], dtype=int32)
>>> x.float()
jt.Var([0. 1. 2.], dtype=float32)
>>> jt.float(x)
jt.Var([0. 1. 2.], dtype=float32)
>>> x.float32()
jt.Var([0. 1. 2.], dtype=float32)
>>> jt.float32(x)
jt.Var([0. 1. 2.], dtype=float32)
Declaration: VarHolder* float32_(VarHolder* x)
- jittor_core.ops.float64()¶
Document: *
Returns a copy of the input var, cast to float64 (double-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3)
>>> x
jt.Var([0 1 2], dtype=int32)
>>> x.double()
jt.Var([0. 1. 2.], dtype=float64)
>>> jt.double(x)
jt.Var([0. 1. 2.], dtype=float64)
>>> x.float64()
jt.Var([0. 1. 2.], dtype=float64)
>>> jt.float64(x)
jt.Var([0. 1. 2.], dtype=float64)
Declaration: VarHolder* float64_(VarHolder* x)
- jittor_core.ops.floor()¶
Document: *
Returns the largest integer less than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32)
>>> jt.floor(x)
jt.Var([-2.0 -1.0 -1.0 -1.0], dtype=float32)
>>> x.floor()
jt.Var([-2.0 -1.0 -1.0 -1.0], dtype=float32)
Declaration: VarHolder* floor(VarHolder* x)
- jittor_core.ops.floor_divide()¶
Document: *
Element-wise divides x by y and returns the floor of the result. This operation is equivalent to x // y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.randint(1, 10, (3,), dtype=jt.int32)
>>> a
jt.Var([9 2 7], dtype=int32)
>>> b = jt.randint(1, 10, (3,), dtype=jt.int32)
>>> b
jt.Var([6 4 6], dtype=int32)
>>> jt.floor_divide(a, b)
jt.Var([1 0 1], dtype=int32)
>>> a // b
jt.Var([1 0 1], dtype=int32)
Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)
- jittor_core.ops.floor_int()¶
Document: *
Returns the largest integer less than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32)
>>> jt.floor_int(x)
jt.Var([-2 -1 -1 -1], dtype=int32)
>>> x.floor_int()
jt.Var([-2 -1 -1 -1], dtype=int32)
Declaration: VarHolder* floor_int(VarHolder* x)
- jittor_core.ops.fuse_transpose()¶
Declaration: VarHolder* fuse_transpose(VarHolder* x, NanoVector axes=NanoVector())
- jittor_core.ops.getitem()¶
Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)
Declaration: vector_to_tuple<VarHolder*> getitem_(VarHolder* x, VarSlices&& slices, int _)
- jittor_core.ops.greater()¶
Document: *
Returns x > y element-wise. This operation is equivalent to the Python expression x > y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)
- jittor_core.ops.greater_equal()¶
Document: *
Returns x >= y element-wise. This operation is equivalent to the Python expression x >= y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)
- jittor_core.ops.index()¶
Document: *
The Index Operator generates the index values of a shape along a given dimension.
It performs the equivalent of the Python pseudo-implementation below:
n = len(shape)-1
x = np.zeros(shape, dtype)
for i0 in range(shape[0]): # 1-st loop
    for i1 in range(shape[1]): # 2-nd loop
        ...... # many loops
        for in in range(shape[n]): # n+1 -th loop
            x[i0,i1,...,in] = i@dim
[in] shape: the output shape, an integer array
[in] dim: the dim of the index.
[in] dtype: the data type string, default int32
Example:
print(jt.index([2,2], 0)) # output: [[0,0],[1,1]]
print(jt.index([2,2], 1)) # output: [[0,1],[0,1]]
Declaration: VarHolder* index(NanoVector shape, int64 dim, NanoString dtype=ns_int32)
Declaration: vector_to_tuple<VarHolder*> index_(NanoVector shape, NanoString dtype=ns_int32)
Document: * shape-dependency version of the index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape-dependency version of the index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector_to_tuple<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
- jittor_core.ops.index_var()¶
Document: * shape-dependency version of the index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape-dependency version of the index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector_to_tuple<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
- jittor_core.ops.int16()¶
Document: *
Returns a copy of the input var, cast to int16.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.int16()
jt.Var([4 2 8], dtype=int16)
>>> jt.int16(x)
jt.Var([4 2 8], dtype=int16)
Declaration: VarHolder* int16_(VarHolder* x)
- jittor_core.ops.int32()¶
Document: *
Returns a copy of the input var, cast to int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.int()
jt.Var([4 2 8], dtype=int32)
>>> jt.int(x)
jt.Var([4 2 8], dtype=int32)
>>> x.int32()
jt.Var([4 2 8], dtype=int32)
>>> jt.int32(x)
jt.Var([4 2 8], dtype=int32)
>>> x.long()
jt.Var([4 2 8], dtype=int32)
>>> jt.long(x)
jt.Var([4 2 8], dtype=int32)
Declaration: VarHolder* int32_(VarHolder* x)
- jittor_core.ops.int64()¶
Document: *
Returns a copy of the input var, cast to int64.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.int64()
jt.Var([4 2 8], dtype=int64)
>>> jt.int64(x)
jt.Var([4 2 8], dtype=int64)
Declaration: VarHolder* int64_(VarHolder* x)
- jittor_core.ops.int8()¶
Document: *
Returns a copy of the input var, cast to int8.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.int8()
jt.Var([4 2 8], dtype=int8)
>>> jt.int8(x)
jt.Var([4 2 8], dtype=int8)
Declaration: VarHolder* int8_(VarHolder* x)
- jittor_core.ops.left_shift()¶
Document: *
Shifts the bits of x to the left by y. Bits are shifted to the left by appending y 0s at the right of x. This operation is equivalent to x << y.
[in] x: the first input, a python number or jt.Var (int32 or int64).
[in] y: the second input, a python number or jt.Var (int32 or int64).
- Example-1::
>>> a = jt.randint(0, 10, shape=(3,))
>>> a
jt.Var([7 6 7], dtype=int32)
>>> b = jt.randint(0, 10, shape=(3,))
>>> b
jt.Var([3 9 8], dtype=int32)
>>> jt.left_shift(a, b)
jt.Var([ 56 3072 1792], dtype=int32)
>>> a << b
jt.Var([ 56 3072 1792], dtype=int32)
Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)
- jittor_core.ops.less()¶
Document: *
Returns x < y element-wise. This operation is equivalent to the Python expression x < y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* less(VarHolder* x, VarHolder* y)
- jittor_core.ops.less_equal()¶
Document: *
Returns x <= y element-wise. This operation is equivalent to the Python expression x <= y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)
- jittor_core.ops.log()¶
Document: *
Returns the natural logarithm of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2
>>> x
jt.Var([0.02863695 1.30122 1.6048753 1.140261 ], dtype=float32)
>>> jt.log(x)
jt.Var([-3.5530574 0.26330233 0.47304606 0.13125724], dtype=float32)
>>> x.log()
jt.Var([-3.5530574 0.26330233 0.47304606 0.13125724], dtype=float32)
Declaration: VarHolder* log(VarHolder* x)
- jittor_core.ops.logical_and()¶
Document: *
Returns the element-wise logical AND of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)
- jittor_core.ops.logical_not()¶
Document: *
Returns the logical NOT of the input x.
[in] x: the input jt.Var, integral or boolean.
- Example-1::
>>> jt.logical_not(jt.int32([-1, 0, 1]))
jt.Var([False True False], dtype=bool)
Declaration: VarHolder* logical_not(VarHolder* x)
- jittor_core.ops.logical_or()¶
Document: *
Returns the element-wise logical OR of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)
- jittor_core.ops.logical_xor()¶
Document: *
Returns the element-wise logical XOR of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)
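A minimal sketch covering the three binary logical ops above (illustrative reprs; inputs may be integral or boolean):
>>> x = jt.int32([1, 1, 0])
>>> y = jt.int32([1, 0, 0])
>>> jt.logical_and(x, y)
jt.Var([ True False False], dtype=bool)
>>> jt.logical_or(x, y)
jt.Var([ True True False], dtype=bool)
>>> jt.logical_xor(x, y)
jt.Var([False True False], dtype=bool)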
- jittor_core.ops.max()¶
Document: *
Returns the maximum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.max(x)
jt.Var([4], dtype=int32)
>>> x.max()
jt.Var([4], dtype=int32)
>>> x.max(dim=1)
jt.Var([4 4], dtype=int32)
>>> x.max(dim=1, keepdims=True)
jt.Var([[4]
 [4]], dtype=int32)
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.maximum()¶
Document: *
Returns the element-wise maximum of x and y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
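A minimal sketch (illustrative reprs; minimum, documented below, is the element-wise counterpart):
>>> a = jt.float32([1, 4, 2])
>>> b = jt.float32([3, 2, 2])
>>> jt.maximum(a, b)
jt.Var([3. 4. 2.], dtype=float32)
>>> jt.minimum(a, b)
jt.Var([1. 2. 2.], dtype=float32)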
Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)
- jittor_core.ops.mean()¶
Document: *
Returns the mean value of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[9 4 4]
 [1 9 6]], dtype=int32)
>>> jt.mean(x)
jt.Var([5.5000005], dtype=float32)
>>> x.mean()
jt.Var([5.5000005], dtype=float32)
>>> x.mean(dim=1)
jt.Var([5.666667 5.3333335], dtype=float32)
>>> x.mean(dim=1, keepdims=True)
jt.Var([[5.666667 ]
 [5.3333335]], dtype=float32)
Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.min()¶
Document: *
Returns the minimum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.min(x)
jt.Var([0], dtype=int32)
>>> x.min()
jt.Var([0], dtype=int32)
>>> x.min(dim=1)
jt.Var([1 0], dtype=int32)
>>> x.min(dim=1, keepdims=True)
jt.Var([[1]
 [0]], dtype=int32)
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.minimum()¶
Document: *
Returns the element-wise minimum of x and y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)
- jittor_core.ops.mod()¶
Document: *
Returns the element-wise remainder of division.
This operation is equivalent to x % y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.rand(3)
>>> a
jt.Var([0.3989529 0.20159635 0.22973768], dtype=float32)
>>> b = jt.rand(3)
>>> b
jt.Var([0.20121202 0.7704864 0.5654395 ], dtype=float32)
>>> jt.mod(a, b)
jt.Var([0.19774088 0.20159635 0.22973768], dtype=float32)
>>> a % b
jt.Var([0.19774088 0.20159635 0.22973768], dtype=float32)
Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)
- jittor_core.ops.mul()¶
Document: *
Element-wise multiplies x with y and returns a new Var. This operation is equivalent to x * y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
- jittor_core.ops.multiply()¶
Document: *
Element-wise multiplies x with y and returns a new Var. This operation is equivalent to x * y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
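A minimal sketch (illustrative; a python number broadcasts against the Var):
>>> a = jt.float32([1, 2, 3])
>>> jt.multiply(a, a)
jt.Var([1. 4. 9.], dtype=float32)
>>> a * 2
jt.Var([2. 4. 6.], dtype=float32)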
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
- jittor_core.ops.negative()¶
Document: *
Returns the negative value of the input x. This operator is equivalent to -x.
[in] x: the input jt.Var.
- Example-1::
>>> jt.negative(jt.float32([-1, 0, 1]))
jt.Var([ 1. -0. -1.], dtype=float32)
Declaration: VarHolder* negative(VarHolder* x)
- jittor_core.ops.not_equal()¶
Document: *
Returns x != y element-wise. This operation is equivalent to the Python expression x != y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)
- jittor_core.ops.numpy_code()¶
Document: *
Numpy Code Operator for easily writing customized ops.
[in] shape: the output shape, an integer array
[in] dtype: the output data type
[in] inputs: A list of input jittor Vars
[in] forward: function, represents the forward python function
[in] backward: A list of functions, representing the gradient for each input
Example-1:
def forward_code(np, data):
    a = data["inputs"][0]
    b = data["outputs"][0]
    np.add(a,a,out=b)

def backward_code(np, data):
    dout = data["dout"]
    out = data["outputs"][0]
    np.copyto(out, dout*2.0)

a = jt.random((5,1))
b = jt.numpy_code(
    a.shape, a.dtype, [a],
    forward_code,
    [backward_code],
)
Example-2:
def forward_code(np, data):
    a,b = data["inputs"]
    c,d = data["outputs"]
    np.add(a,b,out=c)
    np.subtract(a,b,out=d)

def backward_code1(np, data):
    dout = data["dout"]
    out = data["outputs"][0]
    np.copyto(out, dout)

def backward_code2(np, data):
    dout = data["dout"]
    out_index = data["out_index"]
    out = data["outputs"][0]
    if out_index==0:
        np.copyto(out, dout)
    else:
        np.negative(dout, out)

a = jt.random((5,1))
b = jt.random((5,1))
c, d = jt.numpy_code(
    [a.shape, a.shape],
    [a.dtype, a.dtype],
    [a, b],
    forward_code,
    [backward_code1,backward_code2],
)
Declaration: VarHolder* numpy_code(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector<NumpyFunc>&& backward)
Declaration: vector_to_tuple<VarHolder*> numpy_code_(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward, vector<NumpyFunc>&& backward)
Declaration: VarHolder* numpy_code__(NanoVector shape, NanoString dtype, vector<VarHolder*>&& inputs, NumpyFunc&& forward)
Declaration: vector_to_tuple<VarHolder*> numpy_code___(vector<NanoVector>&& shapes, vector<NanoString>&& dtypes, vector<VarHolder*>&& inputs, NumpyFunc&& forward)
- jittor_core.ops.pow()¶
Document: *
Computes x^y element-wise.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
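A minimal sketch (illustrative):
>>> a = jt.float32([1, 2, 3])
>>> jt.pow(a, 2)
jt.Var([1. 4. 9.], dtype=float32)
>>> a ** 2
jt.Var([1. 4. 9.], dtype=float32)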
Declaration: VarHolder* pow(VarHolder* x, VarHolder* y)
- jittor_core.ops.prod()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[7 5 5]
 [5 7 5]], dtype=int32)
>>> jt.prod(x)
jt.Var([30625], dtype=int32)
>>> x.prod()
jt.Var([30625], dtype=int32)
>>> x.prod(dim=1)
jt.Var([175 175], dtype=int32)
>>> x.prod(dim=1, keepdims=True)
jt.Var([[175]
 [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.product()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[7 5 5]
 [5 7 5]], dtype=int32)
>>> jt.prod(x)
jt.Var([30625], dtype=int32)
>>> x.prod()
jt.Var([30625], dtype=int32)
>>> x.prod(dim=1)
jt.Var([175 175], dtype=int32)
>>> x.prod(dim=1, keepdims=True)
jt.Var([[175]
 [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.random()¶
Declaration: VarHolder* random(NanoVector shape, NanoString dtype=ns_float32, NanoString type=ns_uniform)
- jittor_core.ops.reduce()¶
Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)
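A minimal sketch of this generic entry point (assuming op names match the reduce_* ops below, e.g. "add", "multiply", "maximum"; repr is illustrative):
>>> x = jt.int32([[1, 2, 3], [4, 5, 6]])
>>> jt.reduce(x, "add", dim=1)
jt.Var([ 6 15], dtype=int32)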
- jittor_core.ops.reduce_add()¶
Document: *
Returns the sum of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.sum(x)
jt.Var([13], dtype=int32)
>>> x.sum()
jt.Var([13], dtype=int32)
>>> x.sum(dim=1)
jt.Var([7 6], dtype=int32)
>>> x.sum(dim=1, keepdims=True)
jt.Var([[7]
 [6]], dtype=int32)
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_bitwise_and()¶
Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_bitwise_or()¶
Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_bitwise_xor()¶
Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_logical_and()¶
Document: *
Tests if all elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 1 1]
 [0 1 0]], dtype=int32)
>>> jt.all_(x)
jt.Var([False], dtype=int32)
>>> x.all_()
jt.Var([False], dtype=int32)
>>> x.all_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.all_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_logical_or()¶
Document: *
Tests if any elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 0 1]
 [0 0 0]], dtype=int32)
>>> jt.any_(x)
jt.Var([True], dtype=int32)
>>> x.any_()
jt.Var([True], dtype=int32)
>>> x.any_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.any_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_logical_xor()¶
Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_maximum()¶
Document: *
Returns the maximum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.max(x)
jt.Var([4], dtype=int32)
>>> x.max()
jt.Var([4], dtype=int32)
>>> x.max(dim=1)
jt.Var([4 4], dtype=int32)
>>> x.max(dim=1, keepdims=True)
jt.Var([[4]
 [4]], dtype=int32)
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_minimum()¶
Document: *
Returns the minimum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.min(x)
jt.Var([0], dtype=int32)
>>> x.min()
jt.Var([0], dtype=int32)
>>> x.min(dim=1)
jt.Var([1 0], dtype=int32)
>>> x.min(dim=1, keepdims=True)
jt.Var([[1]
 [0]], dtype=int32)
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reduce_multiply()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[7 5 5]
 [5 7 5]], dtype=int32)
>>> jt.prod(x)
jt.Var([30625], dtype=int32)
>>> x.prod()
jt.Var([30625], dtype=int32)
>>> x.prod(dim=1)
jt.Var([175 175], dtype=int32)
>>> x.prod(dim=1, keepdims=True)
jt.Var([[175]
 [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.reindex()¶
Document: *
Reindex Operator is a one-to-many map operator. It performs the equivalent of the Python pseudo-implementation below:
# input is x, output is y
n = len(shape)-1
m = len(x.shape)-1
k = len(overflow_conditions)-1
y = np.zeros(shape, x.dtype)
for i0 in range(shape[0]): # 1-st loop
    for i1 in range(shape[1]): # 2-nd loop
        ...... # many loops
        for in in range(shape[n]): # n+1 -th loop
            if is_overflow(i0,i1,...,in):
                y[i0,i1,...,in] = overflow_value
            else:
                # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
                y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]]

# is_overflow is defined as follows
def is_overflow(i0,i1,...,in):
    return (
        indexes[0] < 0 || indexes[0] >= x.shape[0] ||
        indexes[1] < 0 || indexes[1] >= x.shape[1] ||
        ......
        indexes[m] < 0 || indexes[m] >= x.shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ......
        overflow_conditions[k]
    )
[in] x: an input jittor Var
[in] shape: the output shape, an integer array
[in] indexes: array of c++ style integer expressions; its length should be the same as the number of dimensions of x. Some built-in variables it can use are:
XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden
YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem
i0, i1, ..., in
@e0(...), @e1(...) for extras input index
e0p, e1p, ... for extras input pointer
[in] overflow_value: overflow value
[in] overflow_conditions: array of c++ style boolean expressions; its length can vary. The built-in variables it can use are the same as indexes.
[in] extras: extra vars used for indexing
Example: convolution implemented by the reindex operation:
def conv(x, w):
    N,H,W,C = x.shape
    Kh, Kw, _C, Kc = w.shape
    assert C==_C
    xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [
        'i0', # Nid
        'i1+i3', # Hid+Khid
        'i2+i4', # Wid+KWid
        'i5', # Cid
    ])
    ww = w.broadcast_var(xx)
    yy = xx*ww
    y = yy.sum([3,4,5]) # Kh, Kw, C
    return y, yy
Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector<string>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})
- jittor_core.ops.reindex_reduce()¶
Document: *
Reindex Reduce Operator is a many-to-one map operator. It performs the equivalent of the Python pseudo-implementation below:
# input is y, output is x
n = len(y.shape)-1
m = len(shape)-1
k = len(overflow_conditions)-1
x = np.zeros(shape, y.dtype)
x[:] = initial_value(op)
for i0 in range(y.shape[0]): # 1-st loop
    for i1 in range(y.shape[1]): # 2-nd loop
        ...... # many loops
        for in in range(y.shape[n]): # n+1 -th loop
            # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in
            xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m]
            if not is_overflow(xi0,xi1,...,xim):
                x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in])

# is_overflow is defined as follows
def is_overflow(xi0,xi1,...,xim):
    return (
        xi0 < 0 || xi0 >= shape[0] ||
        xi1 < 0 || xi1 >= shape[1] ||
        ......
        xim < 0 || xim >= shape[m] ||
        # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in
        overflow_conditions[0] ||
        overflow_conditions[1] ||
        ......
        overflow_conditions[k]
    )
[in] y: an input jittor Var
[in] op: a string representing the reduce operation type
[in] shape: the output shape, an integer array
[in] indexes: array of c++ style integer expressions; its length should be the same as the length of the output shape. Some built-in variables it can use are:
XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem
YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden
i0, i1, ..., in
@e0(...), @e1(...) for extras input index
e0p, e1p, ... for extras input pointer
[in] overflow_conditions: array of c++ style boolean expressions; its length can vary. The built-in variables it can use are the same as indexes.
[in] extras: extra vars used for indexing
Example: pooling implemented by the reindex_reduce operation:
def pool(x, size, op):
    N,H,W,C = x.shape
    h = (H+size-1)//size
    w = (W+size-1)//size
    return x.reindex_reduce(op, [N,h,w,C], [
        "i0", # Nid
        f"i1/{size}", # Hid
        f"i2/{size}", # Wid
        "i3", # Cid
    ])
Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector<string>&& indexes, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})
- jittor_core.ops.reindex_var()¶
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})
- jittor_core.ops.reshape()¶
Document: *
Returns a tensor with the same data and number of elements as input, but with the specified shape.
A single dimension may be -1, in which case it's inferred from the remaining dimensions and the number of elements in input.
[in] x: the input jt.Var
[in] shape: the output shape, an integer array
- Example-1::
>>> a = jt.randint(0, 10, shape=(12,))
>>> a
jt.Var([4 0 8 4 6 3 1 8 1 1 2 2], dtype=int32)
>>> jt.reshape(a, (3, 4))
jt.Var([[4 0 8 4]
 [6 3 1 8]
 [1 1 2 2]], dtype=int32)
>>> jt.reshape(a, (-1, 6))
jt.Var([[4 0 8 4 6 3]
 [1 8 1 1 2 2]], dtype=int32)
Declaration: VarHolder* reshape(VarHolder* x, NanoVector shape)
- jittor_core.ops.right_shift()¶
Document: *
Shifts the bits of x to the right by y. This operation is equivalent to x >> y.
[in] x: the first input, a python number or jt.Var (int32 or int64).
[in] y: the second input, a python number or jt.Var (int32 or int64).
- Example-1::
>>> a = jt.randint(0, 1024, shape=(3,))
>>> a
jt.Var([439 113 92], dtype=int32)
>>> b = jt.randint(0, 10, shape=(3,))
>>> b
jt.Var([6 8 4], dtype=int32)
>>> jt.right_shift(a, b)
jt.Var([6 0 5], dtype=int32)
Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)
- jittor_core.ops.round()¶
Document: *
Returns the closest integer to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 2.101595 0.33055413 -0.44147047 -0.7720668 ], dtype=float32)
>>> jt.round(x)
jt.Var([ 2.0 0.0 0.0 -1.0], dtype=float32)
>>> x.round()
jt.Var([ 2.0 0.0 0.0 -1.0], dtype=float32)
Declaration: VarHolder* round(VarHolder* x)
- jittor_core.ops.round_int()¶
Document: *
Returns the closest integer to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 2.101595 0.33055413 -0.44147047 -0.7720668 ], dtype=float32)
>>> jt.round_int(x)
jt.Var([ 2 0 0 -1], dtype=int32)
>>> x.round_int()
jt.Var([ 2 0 0 -1], dtype=int32)
Declaration: VarHolder* round_int(VarHolder* x)
- jittor_core.ops.safe_clip()¶
Document: * Safely clips values to a range, while keeping the gradient passing through.
[in] x: input value
[in] left: float64 clip min value.
[in] right: float64 clip max value.
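A minimal sketch (illustrative): values are clipped into [left, right], while the gradient flows as if no clipping had happened:
>>> x = jt.float32([-2.0, 0.5, 3.0])
>>> jt.safe_clip(x, -1, 1)
jt.Var([-1. 0.5 1. ], dtype=float32)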
Declaration: VarHolder* safe_clip(VarHolder* x, float64 left=-1e300, float64 right=1e300)
- jittor_core.ops.setitem()¶
Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)
- jittor_core.ops.sigmoid()¶
Document: *
Returns the sigmoid of the input x.
\[out_i = \frac{1}{1 + e^{-x_i}}\]
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.49443012 0.4305426 -1.0364404 -1.2628382 ], dtype=float32)
>>> jt.sigmoid(x)
jt.Var([0.62114954 0.6060032 0.2618374 0.2204857 ], dtype=float32)
>>> x.sigmoid()
jt.Var([0.62114954 0.6060032 0.2618374 0.2204857 ], dtype=float32)
Declaration: VarHolder* sigmoid(VarHolder* x)
- jittor_core.ops.sin()¶
Document: *
Returns the sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32)
>>> jt.sin(x)
jt.Var([ 0.32303742 -0.6527857 -0.76586854 0.9738172 ], dtype=float32)
>>> x.sin()
jt.Var([ 0.32303742 -0.6527857 -0.76586854 0.9738172 ], dtype=float32)
Declaration: VarHolder* sin(VarHolder* x)
- jittor_core.ops.sinh()¶
Document: *
Returns the hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32)
>>> jt.sinh(x)
jt.Var([ 0.3349012 -0.77276015 -0.9873369 2.9425898 ], dtype=float32)
>>> x.sinh()
jt.Var([ 0.3349012 -0.77276015 -0.9873369 2.9425898 ], dtype=float32)
Declaration: VarHolder* sinh(VarHolder* x)
- jittor_core.ops.sqrt()¶
Document: *
Returns the square root of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2
>>> x
jt.Var([0.81957287 0.5609612 0.07435933 1.7571875 ], dtype=float32)
>>> jt.sqrt(x)
jt.Var([0.90530264 0.7489734 0.27268907 1.3255895 ], dtype=float32)
>>> x.sqrt()
jt.Var([0.90530264 0.7489734 0.27268907 1.3255895 ], dtype=float32)
Declaration: VarHolder* sqrt(VarHolder* x)
- jittor_core.ops.sub()¶
Document: *
Element-wise subtracts y from x and returns a new Var. This operation is equivalent to x - y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
- jittor_core.ops.subtract()¶
Document: *
Element-wise subtracts y from x and returns a new Var. This operation is equivalent to x - y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
- jittor_core.ops.sum()¶
Document: *
Returns the sum of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3))
>>> x
jt.Var([[4 1 2]
 [0 2 4]], dtype=int32)
>>> jt.sum(x)
jt.Var([13], dtype=int32)
>>> x.sum()
jt.Var([13], dtype=int32)
>>> x.sum(dim=1)
jt.Var([7 6], dtype=int32)
>>> x.sum(dim=1, keepdims=True)
jt.Var([[7]
 [6]], dtype=int32)
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.ops.tan()¶
Document: *
Returns the tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32)
>>> jt.tan(x)
jt.Var([ 0.34133783 -0.8617148 -1.1910915 -4.283673 ], dtype=float32)
>>> x.tan()
jt.Var([ 0.34133783 -0.8617148 -1.1910915 -4.283673 ], dtype=float32)
Declaration: VarHolder* tan(VarHolder* x)
- jittor_core.ops.tanh()¶
Document: *
Returns the hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-0.85885596 1.187804 0.47249675 0.95933187], dtype=float32)
>>> jt.tanh(x)
jt.Var([-0.6956678 0.82989657 0.4402144 0.7439787 ], dtype=float32)
>>> x.tanh()
jt.Var([-0.6956678 0.82989657 0.4402144 0.7439787 ], dtype=float32)
Declaration: VarHolder* tanh(VarHolder* x)
- jittor_core.ops.tape()¶
Declaration: VarHolder* tape(VarHolder* x)
- jittor_core.ops.ternary()¶
Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)
- jittor_core.ops.transpose()¶
Declaration: VarHolder* transpose(VarHolder* x, NanoVector axes=NanoVector())
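A minimal sketch (assuming NumPy-style axis permutation; with axes omitted, the dimensions are reversed):
x = jt.zeros((2, 3, 4))
y = jt.transpose(x, (2, 0, 1))  # axis permutation: y.shape == [4, 2, 3]
z = x.transpose()               # dimensions reversed: z.shape == [4, 3, 2]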
- jittor_core.ops.uint16()¶
Document: *
Returns a copy of the input var, cast to unsigned int16.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.uint16()
jt.Var([4 2 8], dtype=uint16)
>>> jt.uint16(x)
jt.Var([4 2 8], dtype=uint16)
Declaration: VarHolder* uint16_(VarHolder* x)
- jittor_core.ops.uint32()¶
Document: *
Returns a copy of the input var, cast to unsigned int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.uint32()
jt.Var([4 2 8], dtype=uint32)
>>> jt.uint32(x)
jt.Var([4 2 8], dtype=uint32)
Declaration: VarHolder* uint32_(VarHolder* x)
- jittor_core.ops.uint64()¶
Document: *
Returns a copy of the input var, cast to unsigned int64.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.uint64()
jt.Var([4 2 8], dtype=uint64)
>>> jt.uint64(x)
jt.Var([4 2 8], dtype=uint64)
Declaration: VarHolder* uint64_(VarHolder* x)
- jittor_core.ops.uint8()¶
Document: *
Returns a copy of the input var, cast to unsigned int8.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10
>>> x
jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32)
>>> x.uint8()
jt.Var([4 2 8], dtype=uint8)
>>> jt.uint8(x)
jt.Var([4 2 8], dtype=uint8)
Declaration: VarHolder* uint8_(VarHolder* x)
- jittor_core.ops.unary()¶
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
- jittor_core.ops.where()¶
Document: *
The Where Operator generates the indexes of the true conditions.
[in] cond: condition for index generation
[in] dtype: type of return indexes
[out] out: returns arrays of indexes, one for each dim of cond
Example:
jt.where([[0,0,1],[1,0,0]]) # return [jt.Var([0 1], dtype=int32), jt.Var([2 0], dtype=int32)]
Declaration: vector_to_tuple<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)
Document: *
Condition operator: performs cond ? x : y element-wise.
Declaration: VarHolder* where_(VarHolder* cond, VarHolder* x, VarHolder* y)
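A minimal sketch of the cond ? x : y form (assuming the ops.ternary binding above is also exposed as jt.ternary; reprs are illustrative):
>>> cond = jt.int32([1, 0, 1]).bool()
>>> x = jt.float32([1, 2, 3])
>>> y = jt.float32([10, 20, 30])
>>> jt.ternary(cond, x, y)
jt.Var([ 1. 20. 3.], dtype=float32)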
jittor.Var¶
This is the API documentation of Jittor's basic variable class. The API can be accessed directly via my_jittor_var.XXX.
- jittor_core.Var.abs()¶
Document: *
Returns the absolute value of the input x.
[in] x: the input jt.Var
- Example-1::
>>> jt.abs(jt.float32([-1, 0, 1]))
jt.Var([1. 0. 1.], dtype=float32)
Declaration: VarHolder* abs(VarHolder* x)
- jittor_core.Var.acos()¶
Document: *
Returns the inverse cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.5876564 0.740723 -0.667666 0.5371753], dtype=float32)
>>> jt.acos(x)
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
>>> x.acos()
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
Declaration: VarHolder* acos(VarHolder* x)
- jittor_core.Var.acosh()¶
Document: *
Returns the inverse hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) + 1
>>> x
jt.Var([1.3609099 1.8137748 1.1146184 1.3911307], dtype=float32)
>>> jt.acosh(x)
jt.Var([0.8259237 1.2020639 0.47432774 0.8579033 ], dtype=float32)
>>> x.acosh()
jt.Var([0.8259237 1.2020639 0.47432774 0.8579033 ], dtype=float32)
Declaration: VarHolder* acosh(VarHolder* x)
- jittor_core.Var.add()¶
Document: *
Element-wise adds x and y and returns a new Var. This operation is equivalent to x + y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* add(VarHolder* x, VarHolder* y)
- jittor_core.Var.all_()¶
Document: *
Tests if all elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 1 1]
 [0 1 0]], dtype=int32)
>>> jt.all_(x)
jt.Var([False], dtype=int32)
>>> x.all_()
jt.Var([False], dtype=int32)
>>> x.all_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.all_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.any_()¶
Document: *
Tests if any elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuple of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3))
>>> x
jt.Var([[1 0 1]
 [0 0 0]], dtype=int32)
>>> jt.any_(x)
jt.Var([True], dtype=int32)
>>> x.any_()
jt.Var([True], dtype=int32)
>>> x.any_(dim=1)
jt.Var([True False], dtype=int32)
>>> x.any_(dim=1, keepdims=True)
jt.Var([[True]
 [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.arccos()¶
Document: *
Returns the inverse cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.5876564 0.740723 -0.667666 0.5371753], dtype=float32)
>>> jt.acos(x)
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
>>> x.acos()
jt.Var([0.9426371 0.7366504 2.3018656 1.0037117], dtype=float32)
Declaration: VarHolder* acos(VarHolder* x)
- jittor_core.Var.arccosh()¶
Document: *
Returns the inverse hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) + 1
>>> x
jt.Var([1.3609099 1.8137748 1.1146184 1.3911307], dtype=float32)
>>> jt.acosh(x)
jt.Var([0.8259237 1.2020639 0.47432774 0.8579033 ], dtype=float32)
>>> x.acosh()
jt.Var([0.8259237 1.2020639 0.47432774 0.8579033 ], dtype=float32)
Declaration: VarHolder* acosh(VarHolder* x)
- jittor_core.Var.arcsin()¶
Document: *
Returns the arcsine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.09342023 -0.42522037 0.9264933 -0.785264 ], dtype=float32)
>>> jt.asin(x)
jt.Var([ 0.09355665 -0.43920535 1.1849847 -0.9031224 ], dtype=float32)
>>> x.asin()
jt.Var([ 0.09355665 -0.43920535 1.1849847 -0.9031224 ], dtype=float32)
Declaration: VarHolder* asin(VarHolder* x)
- jittor_core.Var.arcsinh()¶
Document: *
Returns the inverse hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.9749726 -0.52341473 0.8906148 1.0338128 ], dtype=float32)
>>> jt.asinh(x)
jt.Var([-1.4323865 -0.5020559 0.8018747 0.90508187], dtype=float32)
>>> x.asinh()
jt.Var([-1.4323865 -0.5020559 0.8018747 0.90508187], dtype=float32)
Declaration: VarHolder* asinh(VarHolder* x)
- jittor_core.Var.arctan()¶
Document: *
Returns the inverse tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-0.85885596 1.187804 0.47249675 0.95933187], dtype=float32)
>>> jt.atan(x)
jt.Var([-0.70961297 0.87102956 0.44140393 0.76464504], dtype=float32)
>>> x.atan()
jt.Var([-0.70961297 0.87102956 0.44140393 0.76464504], dtype=float32)
Declaration: VarHolder* atan(VarHolder* x)
- jittor_core.Var.arctanh()¶
Document: *
Returns the inverse hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.9062414 -0.799802 -0.27219176 -0.7274077 ], dtype=float32)
>>> jt.atanh(x)
jt.Var([ 1.5060828 -1.0980625 -0.27922946 -0.9231999 ], dtype=float32)
>>> x.atanh()
jt.Var([ 1.5060828 -1.0980625 -0.27922946 -0.9231999 ], dtype=float32)
Declaration: VarHolder* atanh(VarHolder* x)
- jittor_core.Var.arg_reduce()¶
Document: *
Returns the indices of the maximum / minimum of the input across a dimension.
[in] x: the input jt.Var.
[in] op: "max" or "min".
[in] dim: int. Specifies the dimension to reduce.
[in] keepdims: bool. Whether the output has dim retained or not.
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 3))
>>> x
jt.Var([[4 2 5]
 [6 7 1]], dtype=int32)
>>> jt.arg_reduce(x, 'max', dim=1, keepdims=False)
[jt.Var([2 1], dtype=int32), jt.Var([5 7], dtype=int32)]
>>> jt.arg_reduce(x, 'min', dim=1, keepdims=False)
[jt.Var([1 2], dtype=int32), jt.Var([2 1], dtype=int32)]
Declaration: vector_to_tuple<VarHolder*> arg_reduce(VarHolder* x, NanoString op, int dim, bool keepdims)
- jittor_core.Var.argsort()¶
Document: *
The Argsort Operator performs an indirect sort by a given key or compare function.
x is the input, y is the output index, satisfying:
x[y[0]] <= x[y[1]] <= x[y[2]] <= … <= x[y[n]]
or
key(y[0]) <= key(y[1]) <= key(y[2]) <= … <= key(y[n])
or
compare(y[0], y[1]) && compare(y[1], y[2]) && …
[in] x: input var for sort
[in] dim: the dim along which to sort
[in] descending: whether the elements are sorted in descending order (default False).
[in] dtype: type of return indexes
[out] index: indexes with the same size as the sorted dim
[out] value: sorted value
Example:
index, value = jt.argsort([11,13,12]) # return [0 2 1], [11 12 13]
index, value = jt.argsort([11,13,12], descending=True) # return [1 2 0], [13 12 11]
index, value = jt.argsort([[11,13,12], [12,11,13]]) # return [[0 2 1],[1 0 2]], [[11 12 13],[11 12 13]]
index, value = jt.argsort([[11,13,12], [12,11,13]], dim=0) # return [[0 1 0],[1 0 1]], [[11 11 12],[12 13 13]]
Declaration: vector_to_tuple<VarHolder*> argsort(VarHolder* x, int dim=-1, bool descending=false, NanoString dtype=ns_int32)
- jittor_core.Var.asin()¶
Document: *
Returns the arcsine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([ 0.09342023 -0.42522037 0.9264933 -0.785264 ], dtype=float32)
>>> jt.asin(x)
jt.Var([ 0.09355665 -0.43920535 1.1849847 -0.9031224 ], dtype=float32)
>>> x.asin()
jt.Var([ 0.09355665 -0.43920535 1.1849847 -0.9031224 ], dtype=float32)
Declaration: VarHolder* asin(VarHolder* x)
- jittor_core.Var.asinh()¶
Document: *
Returns the inverse hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.9749726 -0.52341473 0.8906148 1.0338128 ], dtype=float32)
>>> jt.asinh(x)
jt.Var([-1.4323865 -0.5020559 0.8018747 0.90508187], dtype=float32)
>>> x.asinh()
jt.Var([-1.4323865 -0.5020559 0.8018747 0.90508187], dtype=float32)
Declaration: VarHolder* asinh(VarHolder* x)
- jittor_core.Var.assign()¶
Document: *
Assigns the data from another Var in place.
Declaration: VarHolder* assign(VarHolder* v)
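A minimal sketch (illustrative; assign copies the data in place and returns the Var itself, so the doctest line shows its repr):
>>> x = jt.float32([1, 2, 3])
>>> y = jt.float32([0, 0, 0])
>>> y.assign(x)
jt.Var([1. 2. 3.], dtype=float32)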
- jittor_core.Var.astype()¶
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
- jittor_core.Var.atan()¶
Document: *
Returns the inverse tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-0.85885596 1.187804 0.47249675 0.95933187], dtype=float32)
>>> jt.atan(x)
jt.Var([-0.70961297 0.87102956 0.44140393 0.76464504], dtype=float32)
>>> x.atan()
jt.Var([-0.70961297 0.87102956 0.44140393 0.76464504], dtype=float32)
Declaration: VarHolder* atan(VarHolder* x)
- jittor_core.Var.atanh()¶
Document: *
Returns the inverse hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1
>>> x
jt.Var([ 0.9062414 -0.799802 -0.27219176 -0.7274077 ], dtype=float32)
>>> jt.atanh(x)
jt.Var([ 1.5060828 -1.0980625 -0.27922946 -0.9231999 ], dtype=float32)
>>> x.atanh()
jt.Var([ 1.5060828 -1.0980625 -0.27922946 -0.9231999 ], dtype=float32)
Declaration: VarHolder* atanh(VarHolder* x)
- jittor_core.Var.binary()¶
Declaration: VarHolder* binary(VarHolder* x, VarHolder* y, NanoString p)
- jittor_core.Var.bitwise_and()¶
Document: *
Computes the bitwise AND of x and y.
[in] x: the first input, jt.Var (integral or boolean).
[in] y: the second input, jt.Var (integral or boolean).
Declaration: VarHolder* bitwise_and(VarHolder* x, VarHolder* y)
- jittor_core.Var.bitwise_not()¶
Document: *
Returns the bitwise NOT of the input x.
[in] x: the input jt.Var, integral or boolean.
- Example-1::
>>> jt.bitwise_not(jt.int32([1, 2, -3]))
jt.Var([-2 -3 2], dtype=int32)
Declaration: VarHolder* bitwise_not(VarHolder* x)
- jittor_core.Var.bitwise_or()¶
Document: *
Computes the bitwise OR of x and y.
[in] x: the first input, jt.Var (integral or boolean).
[in] y: the second input, jt.Var (integral or boolean).
Declaration: VarHolder* bitwise_or(VarHolder* x, VarHolder* y)
- jittor_core.Var.bitwise_xor()¶
Document: *
Computes the bitwise XOR of x and y.
[in] x: the first input, jt.Var (integral or boolean).
[in] y: the second input, jt.Var (integral or boolean).
Declaration: VarHolder* bitwise_xor(VarHolder* x, VarHolder* y)
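A minimal sketch covering the three binary bitwise ops above (illustrative reprs):
>>> x = jt.int32([5, 3])
>>> y = jt.int32([3, 1])
>>> jt.bitwise_and(x, y)
jt.Var([1 1], dtype=int32)
>>> jt.bitwise_or(x, y)
jt.Var([7 3], dtype=int32)
>>> jt.bitwise_xor(x, y)
jt.Var([6 2], dtype=int32)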
- jittor_core.Var.bool()¶
Document: *
Returns a copy of the input var, cast to boolean.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3)
>>> x
jt.Var([0 1 2], dtype=int32)
>>> x.bool()
jt.Var([False True True], dtype=bool)
>>> jt.bool(x)
jt.Var([False True True], dtype=bool)
Declaration: VarHolder* bool_(VarHolder* x)
- jittor_core.Var.broadcast()¶
Document: *
Broadcast x to a given shape.
[in] x: the input jt.Var.
[in] shape: the output shape.
[in] dims: specifies the new dimension in the output shape, an integer array.
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> jt.broadcast(x, shape=(2, 3, 2), dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to(VarHolder* x, NanoVector shape, NanoVector dims=NanoVector())
Document: *
Broadcast x to the same shape as y.
[in] x: the input jt.Var.
[in] y: the reference jt.Var.
[in] dims: specifies the new dimension in the output shape, an integer array.
Note
jt.broadcast_var(x, y, dims) is an alias of jt.broadcast(x, y, dims)
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> y = jt.randint(0, 10, shape=(2, 3, 2))
>>> jt.broadcast(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
>>> jt.broadcast_var(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
- jittor_core.Var.broadcast_var()¶
Document: *
Broadcast x to the same shape as y.
[in] x: the input jt.Var.
[in] y: the reference jt.Var.
[in] dims: specifies the new dimension in the output shape, an integer array.
Note
jt.broadcast_var(x, y, dims) is an alias of jt.broadcast(x, y, dims)
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2))
>>> x
jt.Var([[8 1]
 [7 6]], dtype=int32)
>>> y = jt.randint(0, 10, shape=(2, 3, 2))
>>> jt.broadcast(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
>>> jt.broadcast_var(x, y, dims=[1])
jt.Var([[[8 1]
  [8 1]
  [8 1]],
 [[7 6]
  [7 6]
  [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
- jittor_core.Var.candidate()¶
Document: *
The Candidate Operator performs an indirect candidate filter given a fail condition.
x is the input, y is the output index, satisfying:
not fail_cond(y[0], y[1]) and not fail_cond(y[0], y[2]) and not fail_cond(y[1], y[2]) and ... ... and not fail_cond(y[m-2], y[m-1])
where m is the number of selected candidates.
Pseudo code:
y = []
for i in range(n):
    pass = True
    for j in y:
        if (@fail_cond):
            pass = false
            break
    if (pass):
        y.append(i)
return y
[in] x: input var for filter
[in] fail_cond: code for fail condition
[in] dtype: type of return indexes
[out] index: the indexes of the selected candidates.
Example:
jt.candidate(jt.random(100,2), '(@x(j,0)>@x(i,0))or(@x(j,1)>@x(i,1))')
# return y satisfying:
#   x[y[0], 0] <= x[y[1], 0] and x[y[1], 0] <= x[y[2], 0] and ... and x[y[m-2], 0] <= x[y[m-1], 0] and
#   x[y[0], 1] <= x[y[1], 1] and x[y[1], 1] <= x[y[2], 1] and ... and x[y[m-2], 1] <= x[y[m-1], 1]
Declaration: VarHolder* candidate(VarHolder* x, string&& fail_cond, NanoString dtype=ns_int32)
- jittor_core.Var.cast()¶
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
- jittor_core.Var.ceil()¶
Document: *
Returns the smallest integer greater than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4)
>>> x
jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32)
>>> jt.ceil(x)
jt.Var([-1.0 0.0 0.0 0.0], dtype=float32)
>>> x.ceil()
jt.Var([-1.0 0.0 0.0 0.0], dtype=float32)
Declaration: VarHolder* ceil(VarHolder* x)
- jittor_core.Var.ceil_int()¶
Document: *
Returns the smallest integer greater than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32) >>> jt.ceil_int(x) jt.Var([-1 0 0 0], dtype=int32) >>> x.ceil_int() jt.Var([-1 0 0 0], dtype=int32)
Declaration: VarHolder* ceil_int(VarHolder* x)
- jittor_core.Var.check_cascade_setitem()¶
- Document:
check a[x][y] = c
Declaration: VarHolder* check_cascade_setitem(VarHolder* out)
- jittor_core.Var.clone()¶
Declaration: VarHolder* clone(VarHolder* x)
- jittor_core.Var.compile_options¶
Declaration: inline loop_options_t compile_options()
- jittor_core.Var.copy()¶
Declaration: VarHolder* copy(VarHolder* x)
- jittor_core.Var.cos()¶
Document: *
Returns the cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.cos(x) jt.Var([ 0.9463862 0.7575426 0.6429972 -0.2273323], dtype=float32) >>> x.cos() jt.Var([ 0.9463862 0.7575426 0.6429972 -0.2273323], dtype=float32)
Declaration: VarHolder* cos(VarHolder* x)
- jittor_core.Var.cosh()¶
Document: *
Returns the hyperbolic cosine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.cosh(x) jt.Var([1.0545894 1.2637873 1.405288 3.1078668], dtype=float32) >>> x.cosh() jt.Var([1.0545894 1.2637873 1.405288 3.1078668], dtype=float32)
Declaration: VarHolder* cosh(VarHolder* x)
- jittor_core.Var.data¶
Document: *
get a numpy array which shares the data with the Var.
Declaration: inline DataView data()
- jittor_core.Var.debug_msg()¶
Document: *
Prints information about the Var for debugging.
Declaration: string debug_msg()
- jittor_core.Var.detach()¶
- Document:
detach the grad
Declaration: inline VarHolder* detach()
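Example (an illustrative sketch; assumes detach returns a new Var whose gradient is stopped, as the declaration suggests):
>>> import jittor as jt
>>> x = jt.array([1.0, 2.0])
>>> y = x.detach()
>>> y.is_stop_grad()
True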
- jittor_core.Var.detach_inplace()¶
Document: *
enable the gradient calculation for the Var.
Declaration: inline VarHolder* start_grad()
- jittor_core.Var.dim()¶
Document: *
return the number of dimensions.
Declaration: inline int ndim()
- jittor_core.Var.div()¶
Document: *
Element-wise divides x by y and returns a new Var. This operation is equivalent to x / y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.empty((3,), dtype=jt.int32) >>> a jt.Var([707406378 707406378 707406378], dtype=int32) >>> b = jt.empty((3,), dtype=jt.int32) >>> b jt.Var([674510453 171649398 538976288], dtype=int32) >>> jt.divide(a, b) jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32) >>> a / b jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
Returns a float value even if the dtypes of both input Vars are integers. @see jt.ops.floor_divide() for floor division.
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
- jittor_core.Var.divide()¶
Document: *
Element-wise divides x by y and returns a new Var. This operation is equivalent to x / y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.empty((3,), dtype=jt.int32) >>> a jt.Var([707406378 707406378 707406378], dtype=int32) >>> b = jt.empty((3,), dtype=jt.int32) >>> b jt.Var([674510453 171649398 538976288], dtype=int32) >>> jt.divide(a, b) jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32) >>> a / b jt.Var([1.0487701 4.1212287 1.3125001], dtype=float32)
Returns a float value even if the dtypes of both input Vars are integers. @see jt.ops.floor_divide() for floor division.
Declaration: VarHolder* divide(VarHolder* x, VarHolder* y)
- jittor_core.Var.double()¶
Document: *
Returns a copy of the input var, cast to float64 (double-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3) >>> x jt.Var([0 1 2], dtype=int32) >>> x.double() jt.Var([0. 1. 2.], dtype=float64) >>> jt.double(x) jt.Var([0. 1. 2.], dtype=float64) >>> x.float64() jt.Var([0. 1. 2.], dtype=float64) >>> jt.float64(x) jt.Var([0. 1. 2.], dtype=float64)
Declaration: VarHolder* float64_(VarHolder* x)
- jittor_core.Var.dtype¶
Document: *
return the data type of the Var.
Declaration: inline NanoString dtype()
- jittor_core.Var.equal()¶
Document: *
Returns x == y element-wise. This operation is equivalent to x == y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* equal(VarHolder* x, VarHolder* y)
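Example (an illustrative sketch; the same pattern applies to the other comparison operators below):
>>> import jittor as jt
>>> a = jt.array([1, 2, 3])
>>> b = jt.array([3, 2, 1])
>>> jt.equal(a, b)
jt.Var([False  True False], dtype=bool)
>>> a == b
jt.Var([False  True False], dtype=bool)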
- jittor_core.Var.erf()¶
Document: *
Computes the error function of each element. The error function is defined as follows:
\[erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt\]
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.49443012 0.4305426 -1.0364404 -1.2628382 ], dtype=float32) >>> jt.erf(x) jt.Var([ 0.51559156 0.45739546 -0.85728306 -0.9258883 ], dtype=float32) >>> x.erf() jt.Var([ 0.51559156 0.45739546 -0.85728306 -0.9258883 ], dtype=float32)
Declaration: VarHolder* erf(VarHolder* x)
- jittor_core.Var.erfinv()¶
Document: *
Computes the inverse error function of each element.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 - 1 >>> x jt.Var([ 0.00277209 -0.26642472 0.7869792 0.5415418 ], dtype=float32) >>> jt.erfinv(x) jt.Var([ 0.00245671 -0.24068035 0.8805613 0.5242405 ], dtype=float32) >>> x.erfinv() jt.Var([ 0.00245671 -0.24068035 0.8805613 0.5242405 ], dtype=float32)
Declaration: VarHolder* erfinv(VarHolder* x)
- jittor_core.Var.exp()¶
Document: *
Returns the exponential of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 >>> x jt.Var([1.9841381 1.4103996 0.5855549 1.4212812], dtype=float32) >>> jt.exp(x) jt.Var([7.2727766 4.0975924 1.7959872 4.1424246], dtype=float32) >>> x.exp() jt.Var([7.2727766 4.0975924 1.7959872 4.1424246], dtype=float32)
Declaration: VarHolder* exp(VarHolder* x)
- jittor_core.Var.expand_as()¶
Document: *
Broadcast x to the same shape as y.
[in] x: the input jt.Var.
[in] y: the reference jt.Var.
[in] dims: specifies the new dimension in the output shape, an integer array.
注解
jt.broadcast_var(x, y, dims) is an alias of jt.broadcast(x, y, dims)
- Example-1::
>>> x = jt.randint(0, 10, shape=(2, 2)) >>> x jt.Var([[8 1] [7 6]], dtype=int32) >>> y = jt.randint(0, 10, shape=(2, 3, 2)) >>> jt.broadcast(x, y, dims=[1]) jt.Var([[[8 1] [8 1] [8 1]], [[7 6] [7 6] [7 6]]], dtype=int32) >>> jt.broadcast_var(x, y, dims=[1]) jt.Var([[[8 1] [8 1] [8 1]], [[7 6] [7 6] [7 6]]], dtype=int32)
Declaration: VarHolder* broadcast_to_(VarHolder* x, VarHolder* y, NanoVector dims=NanoVector())
- jittor_core.Var.fetch_sync()¶
Document: *
Returns a numpy array copy of the Var.
Declaration: ArrayArgs fetch_sync()
- jittor_core.Var.float()¶
Document: *
Returns a copy of the input var, cast to float32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3) >>> x jt.Var([0 1 2], dtype=int32) >>> x.float() jt.Var([0. 1. 2.], dtype=float32) >>> jt.float(x) jt.Var([0. 1. 2.], dtype=float32) >>> x.float32() jt.Var([0. 1. 2.], dtype=float32) >>> jt.float32(x) jt.Var([0. 1. 2.], dtype=float32)
Declaration: VarHolder* float32_(VarHolder* x)
- jittor_core.Var.float16()¶
Document: *
Returns a copy of the input var, cast to float16 (half-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.half() jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> jt.half(x) jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> x.float16() jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> jt.float16(x) jt.Var([4.094 2.008 8.48 ], dtype=float16)
Declaration: VarHolder* float16_(VarHolder* x)
- jittor_core.Var.float32()¶
Document: *
Returns a copy of the input var, cast to float32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3) >>> x jt.Var([0 1 2], dtype=int32) >>> x.float() jt.Var([0. 1. 2.], dtype=float32) >>> jt.float(x) jt.Var([0. 1. 2.], dtype=float32) >>> x.float32() jt.Var([0. 1. 2.], dtype=float32) >>> jt.float32(x) jt.Var([0. 1. 2.], dtype=float32)
Declaration: VarHolder* float32_(VarHolder* x)
- jittor_core.Var.float64()¶
Document: *
Returns a copy of the input var, cast to float64 (double-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.arange(3) >>> x jt.Var([0 1 2], dtype=int32) >>> x.double() jt.Var([0. 1. 2.], dtype=float64) >>> jt.double(x) jt.Var([0. 1. 2.], dtype=float64) >>> x.float64() jt.Var([0. 1. 2.], dtype=float64) >>> jt.float64(x) jt.Var([0. 1. 2.], dtype=float64)
Declaration: VarHolder* float64_(VarHolder* x)
- jittor_core.Var.floor()¶
Document: *
Returns the largest integer less than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32) >>> jt.floor(x) jt.Var([-2.0 -1.0 -1.0 -1.0], dtype=float32) >>> x.floor() jt.Var([-2.0 -1.0 -1.0 -1.0], dtype=float32)
Declaration: VarHolder* floor(VarHolder* x)
- jittor_core.Var.floor_divide()¶
Document: *
Element-wise divides x by y and returns the floor of the result. This operation is equivalent to x // y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.randint(1, 10, (3,), dtype=jt.int32) >>> a jt.Var([9 2 7], dtype=int32) >>> b = jt.randint(1, 10, (3,), dtype=jt.int32) >>> b jt.Var([6 4 6], dtype=int32) >>> jt.floor_divide(a, b) jt.Var([1 0 1], dtype=int32) >>> a // b jt.Var([1 0 1], dtype=int32)
Declaration: VarHolder* floor_divide(VarHolder* x, VarHolder* y)
- jittor_core.Var.floor_int()¶
Document: *
Returns the largest integer less than or equal to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([-1.0339162 -0.7259972 -0.9220003 -0.8449701], dtype=float32) >>> jt.floor_int(x) jt.Var([-2 -1 -1 -1], dtype=int32) >>> x.floor_int() jt.Var([-2 -1 -1 -1], dtype=int32)
Declaration: VarHolder* floor_int(VarHolder* x)
- jittor_core.Var.fuse_transpose()¶
Declaration: VarHolder* fuse_transpose(VarHolder* x, NanoVector axes=NanoVector())
- jittor_core.Var.getitem()¶
Declaration: VarHolder* getitem(VarHolder* x, VarSlices&& slices)
Declaration: vector_to_tuple<VarHolder*> getitem_(VarHolder* x, VarSlices&& slices, int _)
- jittor_core.Var.grad¶
- Document:
Jittor Var doesn’t have this interface, please change your code as below:
model = Model() optimizer = SGD(model.parameters()) ... optimizer.backward(loss) for p in model.parameters(): # prev code: # grad = p.grad # change to: grad = p.opt_grad(optimizer)
Declaration: int grad()
- jittor_core.Var.greater()¶
Document: *
Returns x > y element-wise. This operation is equivalent to x > y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* greater(VarHolder* x, VarHolder* y)
- jittor_core.Var.greater_equal()¶
Document: *
Returns x >= y element-wise. This operation is equivalent to x >= y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* greater_equal(VarHolder* x, VarHolder* y)
- jittor_core.Var.half()¶
Document: *
Returns a copy of the input var, casted to float16 (half-precision float).
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.half() jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> jt.half(x) jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> x.float16() jt.Var([4.094 2.008 8.48 ], dtype=float16) >>> jt.float16(x) jt.Var([4.094 2.008 8.48 ], dtype=float16)
Declaration: VarHolder* float16_(VarHolder* x)
- jittor_core.Var.index()¶
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector_to_tuple<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
- jittor_core.Var.index_var()¶
Document: * shape dependency version of index op
jt.index_var(a, 1) is similar to jt.index(a.shape, 1)
Declaration: VarHolder* index__(VarHolder* a, int64 dim, NanoString dtype=ns_int32)
Document: * shape dependency version of index op
jt.index_var(a) is similar to jt.index(a.shape)
Declaration: vector_to_tuple<VarHolder*> index___(VarHolder* a, NanoString dtype=ns_int32)
- jittor_core.Var.int()¶
Document: *
Returns a copy of the input var, cast to int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int() jt.Var([4 2 8], dtype=int32) >>> jt.int(x) jt.Var([4 2 8], dtype=int32) >>> x.int32() jt.Var([4 2 8], dtype=int32) >>> jt.int32(x) jt.Var([4 2 8], dtype=int32) >>> x.long() jt.Var([4 2 8], dtype=int32) >>> jt.long(x) jt.Var([4 2 8], dtype=int32)
Declaration: VarHolder* int32_(VarHolder* x)
- jittor_core.Var.int16()¶
Document: *
Returns a copy of the input var, cast to int16.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int16() jt.Var([4 2 8], dtype=int16) >>> jt.int16(x) jt.Var([4 2 8], dtype=int16)
Declaration: VarHolder* int16_(VarHolder* x)
- jittor_core.Var.int32()¶
Document: *
Returns a copy of the input var, cast to int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int() jt.Var([4 2 8], dtype=int32) >>> jt.int(x) jt.Var([4 2 8], dtype=int32) >>> x.int32() jt.Var([4 2 8], dtype=int32) >>> jt.int32(x) jt.Var([4 2 8], dtype=int32) >>> x.long() jt.Var([4 2 8], dtype=int32) >>> jt.long(x) jt.Var([4 2 8], dtype=int32)
Declaration: VarHolder* int32_(VarHolder* x)
- jittor_core.Var.int64()¶
Document: *
Returns a copy of the input var, cast to int64.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int64() jt.Var([4 2 8], dtype=int64) >>> jt.int64(x) jt.Var([4 2 8], dtype=int64)
Declaration: VarHolder* int64_(VarHolder* x)
- jittor_core.Var.int8()¶
Document: *
Returns a copy of the input var, cast to int8.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int8() jt.Var([4 2 8], dtype=int8) >>> jt.int8(x) jt.Var([4 2 8], dtype=int8)
Declaration: VarHolder* int8_(VarHolder* x)
- jittor_core.Var.is_stop_fuse()¶
Document: *
return True if operator fusion is stopped.
Declaration: inline bool is_stop_fuse()
- jittor_core.Var.is_stop_grad()¶
Document: *
return True if the gradient is stopped.
Declaration: inline bool is_stop_grad()
- jittor_core.Var.item()¶
Document: *
returns the Python number if the Var contains only one element.
For other cases, see data().
Declaration: ItemData item()
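Example (illustrative sketch):
>>> import jittor as jt
>>> x = jt.array([5])
>>> x.item()
5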
- jittor_core.Var.left_shift()¶
Document: *
Shifts the bits of x to the left by y. Bits are shifted to the left by appending y 0s at the right of x. This operation is equivalent to x << y.
[in] x: the first input, a python number or jt.Var (int32 or int64).
[in] y: the second input, a python number or jt.Var (int32 or int64).
- Example-1::
>>> a = jt.randint(0, 10, shape=(3,)) >>> a jt.Var([7 6 7], dtype=int32) >>> b = jt.randint(0, 10, shape=(3,)) >>> b jt.Var([3 9 8], dtype=int32) >>> jt.left_shift(a, b) jt.Var([ 56 3072 1792], dtype=int32) >>> a << b jt.Var([ 56 3072 1792], dtype=int32)
Declaration: VarHolder* left_shift(VarHolder* x, VarHolder* y)
- jittor_core.Var.less()¶
Document: *
Returns x < y element-wise. This operation is equivalent to x < y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* less(VarHolder* x, VarHolder* y)
- jittor_core.Var.less_equal()¶
Document: *
Returns x <= y element-wise. This operation is equivalent to x <= y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* less_equal(VarHolder* x, VarHolder* y)
- jittor_core.Var.location()¶
Declaration: inline string location()
- jittor_core.Var.log()¶
Document: *
Returns the natural logarithm of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 >>> x jt.Var([0.02863695 1.30122 1.6048753 1.140261 ], dtype=float32) >>> jt.log(x) jt.Var([-3.5530574 0.26330233 0.47304606 0.13125724], dtype=float32) >>> x.log() jt.Var([-3.5530574 0.26330233 0.47304606 0.13125724], dtype=float32)
Declaration: VarHolder* log(VarHolder* x)
- jittor_core.Var.logical_and()¶
Document: *
Returns the element-wise logical AND of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_and(VarHolder* x, VarHolder* y)
- jittor_core.Var.logical_not()¶
Document: *
Returns the logical NOT of the input x.
[in] x: the input jt.Var, integral or boolean.
- Example-1::
>>> jt.logical_not(jt.int32([-1, 0, 1])) jt.Var([False True False], dtype=bool)
Declaration: VarHolder* logical_not(VarHolder* x)
- jittor_core.Var.logical_or()¶
Document: *
Returns the element-wise logical OR of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_or(VarHolder* x, VarHolder* y)
- jittor_core.Var.logical_xor()¶
Document: *
Returns the element-wise logical XOR of the inputs.
[in] x: the first input, jt.Var.
[in] y: the second input, jt.Var.
Declaration: VarHolder* logical_xor(VarHolder* x, VarHolder* y)
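Example (an illustrative sketch covering the three logical operators above):
>>> import jittor as jt
>>> a = jt.array([True, True, False])
>>> b = jt.array([True, False, False])
>>> jt.logical_and(a, b)
jt.Var([ True False False], dtype=bool)
>>> jt.logical_or(a, b)
jt.Var([ True  True False], dtype=bool)
>>> jt.logical_xor(a, b)
jt.Var([False  True False], dtype=bool)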
- jittor_core.Var.long()¶
Document: *
Returns a copy of the input var, cast to int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.int() jt.Var([4 2 8], dtype=int32) >>> jt.int(x) jt.Var([4 2 8], dtype=int32) >>> x.int32() jt.Var([4 2 8], dtype=int32) >>> jt.int32(x) jt.Var([4 2 8], dtype=int32) >>> x.long() jt.Var([4 2 8], dtype=int32) >>> jt.long(x) jt.Var([4 2 8], dtype=int32)
Declaration: VarHolder* int32_(VarHolder* x)
- jittor_core.Var.max()¶
Document: *
Returns the maximum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.max(x) jt.Var([4], dtype=int32) >>> x.max() jt.Var([4], dtype=int32) >>> x.max(dim=1) jt.Var([4 4], dtype=int32) >>> x.max(dim=1, keepdims=True) jt.Var([[4] [4]], dtype=int32)
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.maximum()¶
Document: *
Returns the element-wise maximum of x and y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* maximum(VarHolder* x, VarHolder* y)
- jittor_core.Var.mean()¶
Document: *
Returns the mean value of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[9 4 4] [1 9 6]], dtype=int32) >>> jt.mean(x) jt.Var([5.5000005], dtype=float32) >>> x.mean() jt.Var([5.5000005], dtype=float32) >>> x.mean(dim=1) jt.Var([5.666667 5.3333335], dtype=float32) >>> x.mean(dim=1, keepdims=True) jt.Var([[5.666667 ] [5.3333335]], dtype=float32)
Declaration: VarHolder* reduce_mean(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_mean_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_mean__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.min()¶
Document: *
Returns the minimum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.min(x) jt.Var([0], dtype=int32) >>> x.min() jt.Var([0], dtype=int32) >>> x.min(dim=1) jt.Var([1 0], dtype=int32) >>> x.min(dim=1, keepdims=True) jt.Var([[1] [0]], dtype=int32)
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.minimum()¶
Document: *
Returns the element-wise minimum of x and y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* minimum(VarHolder* x, VarHolder* y)
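Example (an illustrative sketch for maximum and minimum):
>>> import jittor as jt
>>> a = jt.array([1, 5, 2])
>>> b = jt.array([3, 4, 4])
>>> jt.maximum(a, b)
jt.Var([3 5 4], dtype=int32)
>>> jt.minimum(a, b)
jt.Var([1 4 2], dtype=int32)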
- jittor_core.Var.mod()¶
Document: *
Returns the element-wise remainder of division.
This operation is equivalent to x % y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
- Example-1::
>>> a = jt.rand(3) >>> a jt.Var([0.3989529 0.20159635 0.22973768], dtype=float32) >>> b = jt.rand(3) >>> b jt.Var([0.20121202 0.7704864 0.5654395 ], dtype=float32) >>> jt.mod(a, b) jt.Var([0.19774088 0.20159635 0.22973768], dtype=float32) >>> a % b jt.Var([0.19774088 0.20159635 0.22973768], dtype=float32)
Declaration: VarHolder* mod(VarHolder* x, VarHolder* y)
- jittor_core.Var.mul()¶
Document: *
Element-wise multiplies x with y and returns a new Var. This operation is equivalent to x * y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
- jittor_core.Var.multiply()¶
Document: *
Element-wise multiplies x with y and returns a new Var. This operation is equivalent to x * y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* multiply(VarHolder* x, VarHolder* y)
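Example (illustrative sketch):
>>> import jittor as jt
>>> a = jt.array([1, 2, 3])
>>> b = jt.array([4, 5, 6])
>>> jt.multiply(a, b)
jt.Var([ 4 10 18], dtype=int32)
>>> a * b
jt.Var([ 4 10 18], dtype=int32)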
- jittor_core.Var.name()¶
Document: *
set the name of the Var.
Declaration: inline VarHolder* name(const char* s)
Document: *
return the name of the Var.
Declaration: inline const char* name()
- jittor_core.Var.nbytes¶
Document: *
return the number of bytes of this Var.
Declaration: inline int64 nbytes()
- jittor_core.Var.ndim¶
Document: *
return the number of dimensions.
Declaration: inline int ndim()
- jittor_core.Var.negative()¶
Document: *
Returns the negative value of the input x. This operator is equivalent to -x.
[in] x: the input jt.Var.
- Example-1::
>>> jt.negative(jt.float32([-1, 0, 1])) jt.Var([ 1. -0. -1.], dtype=float32)
Declaration: VarHolder* negative(VarHolder* x)
- jittor_core.Var.not_equal()¶
Document: *
Returns x != y element-wise. This operation is equivalent to x != y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* not_equal(VarHolder* x, VarHolder* y)
- jittor_core.Var.numel()¶
Document: *
return the number of elements in the Var.
Declaration: inline int64 numel()
- jittor_core.Var.numpy()¶
Document: *
Returns a numpy array copy of the Var.
Declaration: ArrayArgs fetch_sync()
- jittor_core.Var.out_hint()¶
Document: *
output hint for training optimization
Declaration: inline VarHolder* out_hint()
- jittor_core.Var.prod()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[7 5 5] [5 7 5]], dtype=int32) >>> jt.prod(x) jt.Var([30625], dtype=int32) >>> x.prod() jt.Var([30625], dtype=int32) >>> x.prod(dim=1) jt.Var([175 175], dtype=int32) >>> x.prod(dim=1, keepdims=True) jt.Var([[175] [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.product()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[7 5 5] [5 7 5]], dtype=int32) >>> jt.prod(x) jt.Var([30625], dtype=int32) >>> x.prod() jt.Var([30625], dtype=int32) >>> x.prod(dim=1) jt.Var([175 175], dtype=int32) >>> x.prod(dim=1, keepdims=True) jt.Var([[175] [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.raw_ptr¶
Declaration: inline uint64 raw_ptr()
- jittor_core.Var.reduce()¶
Declaration: VarHolder* reduce(VarHolder* x, NanoString op, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_(VarHolder* x, NanoString op, NanoVector dims=NanoVector(), bool keepdims=false)
- jittor_core.Var.reduce_add()¶
Document: *
Returns the sum of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.sum(x) jt.Var([13], dtype=int32) >>> x.sum() jt.Var([13], dtype=int32) >>> x.sum(dim=1) jt.Var([7 6], dtype=int32) >>> x.sum(dim=1, keepdims=True) jt.Var([[7] [6]], dtype=int32)
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_bitwise_and()¶
Declaration: VarHolder* reduce_bitwise_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_bitwise_or()¶
Declaration: VarHolder* reduce_bitwise_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_bitwise_xor()¶
Declaration: VarHolder* reduce_bitwise_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_bitwise_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_logical_and()¶
Document: *
Tests if all elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3)) >>> x jt.Var([[1 1 1] [0 1 0]], dtype=int32) >>> jt.all_(x) jt.Var([False], dtype=int32) >>> x.all_() jt.Var([False], dtype=int32) >>> x.all_(dim=1) jt.Var([True False], dtype=int32) >>> x.all_(dim=1, keepdims=True) jt.Var([[True] [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_and(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_and_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_and__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_logical_or()¶
Document: *
Tests if any elements in input evaluate to True.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(2, shape=(2, 3)) >>> x jt.Var([[1 0 1] [0 0 0]], dtype=int32) >>> jt.any_(x) jt.Var([True], dtype=int32) >>> x.any_() jt.Var([True], dtype=int32) >>> x.any_(dim=1) jt.Var([True False], dtype=int32) >>> x.any_(dim=1, keepdims=True) jt.Var([[True] [False]], dtype=int32)
Declaration: VarHolder* reduce_logical_or(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_or_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_or__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_logical_xor()¶
Declaration: VarHolder* reduce_logical_xor(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_logical_xor__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_maximum()¶
Document: *
Returns the maximum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.max(x) jt.Var([4], dtype=int32) >>> x.max() jt.Var([4], dtype=int32) >>> x.max(dim=1) jt.Var([4 4], dtype=int32) >>> x.max(dim=1, keepdims=True) jt.Var([[4] [4]], dtype=int32)
Declaration: VarHolder* reduce_maximum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_maximum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_maximum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_minimum()¶
Document: *
Returns the minimum elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.min(x) jt.Var([0], dtype=int32) >>> x.min() jt.Var([0], dtype=int32) >>> x.min(dim=1) jt.Var([1 0], dtype=int32) >>> x.min(dim=1, keepdims=True) jt.Var([[1] [0]], dtype=int32)
Declaration: VarHolder* reduce_minimum(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_minimum_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_minimum__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reduce_multiply()¶
Document: *
Returns the product of all the elements in the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[7 5 5] [5 7 5]], dtype=int32) >>> jt.prod(x) jt.Var([30625], dtype=int32) >>> x.prod() jt.Var([30625], dtype=int32) >>> x.prod(dim=1) jt.Var([175 175], dtype=int32) >>> x.prod(dim=1, keepdims=True) jt.Var([[175] [175]], dtype=int32)
Declaration: VarHolder* reduce_multiply(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_multiply_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_multiply__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.reindex()¶
Document: *
The Reindex Operator is a one-to-many map operator. Its behavior is equivalent to the Python pseudo-implementation below:
# input is x, output is y n = len(shape)-1 m = len(x.shape)-1 k = len(overflow_conditions)-1 y = np.zeros(shape, x.dtype) for i0 in range(shape[0]): # 1-st loop for i1 in range(shape[1]): # 2-nd loop ...... # many loops for in in range(shape[n]) # n+1 -th loop if is_overflow(i0,i1,...,in): y[i0,i1,...,in] = overflow_value else: # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in y[i0,i1,...,in] = x[indexes[0],indexes[1],...,indexes[m]] # is_overflow is defined as following def is_overflow(i0,i1,...,in): return ( indexes[0] < 0 || indexes[0] >= x.shape[0] || indexes[1] < 0 || indexes[1] >= x.shape[1] || ...... indexes[m] < 0 || indexes[m] >= x.shape[m] || # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in overflow_conditions[0] || overflow_conditions[1] || ...... overflow_conditions[k] )
[in] x: A input jittor Var
[in] shape: the output shape, an integer array
[in] indexes: array of C++-style integer expressions; its length should be the same as the number of dimensions of x. Some built-in variables it can use are:
XDIM, xshape0, ..., xshapen, xstride0, ..., xstriden YDIM, yshape0, ..., yshapem, ystride0, ..., ystridem i0, i1, ..., in @e0(...), @e1(...) for extras input index e0p, e1p , ... for extras input pointer
[in] overflow_value: overflow value
[in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as those of indexes
[in] extras: extra var used for index
Example Convolution implemented by reindex operation:
def conv(x, w): N,H,W,C = x.shape Kh, Kw, _C, Kc = w.shape assert C==_C xx = x.reindex([N,H-Kh+1,W-Kw+1,Kh,Kw,C,Kc], [ 'i0', # Nid 'i1+i3', # Hid+Khid 'i2+i4', # Wid+KWid 'i5', # Cid ]) ww = w.broadcast_var(xx) yy = xx*ww y = yy.sum([3,4,5]) # Kh, Kw, C return y, yy
Declaration: VarHolder* reindex(VarHolder* x, NanoVector shape, vector<string>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})
- jittor_core.Var.reindex_reduce()¶
Document: *
The Reindex Reduce Operator is a many-to-one map operator. Its behavior is equivalent to the Python pseudo-implementation below:
# input is y, output is x n = len(y.shape)-1 m = len(shape)-1 k = len(overflow_conditions)-1 x = np.zeros(shape, y.dtype) x[:] = initial_value(op) for i0 in range(y.shape[0]): # 1-st loop for i1 in range(y.shape[1]): # 2-nd loop ...... # many loops for in in range(y.shape[n]) # n+1 -th loop # indexes[i] is a c++ style integer expression consisting of i0,i1,...,in xi0,xi1,...,xim = indexes[0],indexes[1],...,indexes[m] if not is_overflow(xi0,xi1,...,xim): x[xi0,xi1,...,xim] = op(x[xi0,xi1,...,xim], y[i0,i1,...,in]) # is_overflow is defined as following def is_overflow(xi0,xi1,...,xim): return ( xi0 < 0 || xi0 >= shape[0] || xi1 < 0 || xi1 >= shape[1] || ...... xim < 0 || xim >= shape[m] || # overflow_conditions[i] is a c++ style boolean expression consisting of i0,i1,...,in overflow_conditions[0] || overflow_conditions[1] || ...... overflow_conditions[k] )
[in] y: A input jittor Var
[in] op: a string represent the reduce operation type
[in] shape: the output shape, an integer array
[in] indexes: array of C++-style integer expressions; its length should be the same as the length of the output shape. Some built-in variables it can use are:
XDIM, xshape0, ..., xshapem, xstride0, ..., xstridem YDIM, yshape0, ..., yshapen, ystride0, ..., ystriden i0, i1, ..., in @e0(...), @e1(...) for extras input index e0p, e1p , ... for extras input pointer
[in] overflow_conditions: array of C++-style boolean expressions; its length can vary. The built-in variables it can use are the same as those of indexes.
[in] extras: extra var used for index
Example
Pooling implemented by the reindex_reduce operation:
def pool(x, size, op): N,H,W,C = x.shape h = (H+size-1)//size w = (W+size-1)//size return x.reindex_reduce(op, [N,h,w,C], [ "i0", # Nid f"i1/{size}", # Hid f"i2/{size}", # Wid "i3", # Cid ])
Declaration: VarHolder* reindex_reduce(VarHolder* y, NanoString op, NanoVector shape, vector<string>&& indexes, vector<string>&& overflow_conditions={}, vector<VarHolder*>&& extras={})
- jittor_core.Var.reindex_var()¶
Document: * Alias x.reindex([i,j,k]) ->
x.reindex(i.shape, ['@e0(...)','@e1(...)','@e2(...)'], extras=[i,j,k])
Declaration: VarHolder* reindex_(VarHolder* x, vector<VarHolder*>&& indexes, float64 overflow_value=0, vector<string>&& overflow_conditions={})
- jittor_core.Var.requires_grad¶
Document: *
return True if the Var requires gradient calculation.
@see is_stop_grad
Declaration: inline bool get_requires_grad()
- jittor_core.Var.right_shift()¶
Document: *
Shifts the bits of x to the right by y. This operation is equivalent to x >> y.
[in] x: the first input, a python number or jt.Var (int32 or int64).
[in] y: the second input, a python number or jt.Var (int32 or int64).
- Example-1::
>>> a = jt.randint(0, 1024, shape=(3,)) >>> a jt.Var([439 113 92], dtype=int32) >>> b = jt.randint(0, 10, shape=(3,)) >>> b jt.Var([6 8 4], dtype=int32) >>> jt.right_shift(a, b) jt.Var([6 0 5], dtype=int32)
Declaration: VarHolder* right_shift(VarHolder* x, VarHolder* y)
- jittor_core.Var.round()¶
Document: *
Returns the closest integer to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 2.101595 0.33055413 -0.44147047 -0.7720668 ], dtype=float32) >>> jt.round(x) jt.Var([ 2.0 0.0 0.0 -1.0], dtype=float32) >>> x.round() jt.Var([ 2.0 0.0 0.0 -1.0], dtype=float32)
Declaration: VarHolder* round(VarHolder* x)
- jittor_core.Var.round_int()¶
Document: *
Returns the closest integer to the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 2.101595 0.33055413 -0.44147047 -0.7720668 ], dtype=float32) >>> jt.round_int(x) jt.Var([ 2 0 0 -1], dtype=int32) >>> x.round_int() jt.Var([ 2 0 0 -1], dtype=int32)
Declaration: VarHolder* round_int(VarHolder* x)
- jittor_core.Var.safe_clip()¶
Document: * Safely clips the value to a range, and lets
the gradient pass through.
[in] x: input value
[in] left: float64 clip min value.
[in] right: float64 clip max value.
Declaration: VarHolder* safe_clip(VarHolder* x, float64 left=-1e300, float64 right=1e300)
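Example (an illustrative sketch of the intended behavior: the forward pass clips, while the backward pass lets the gradient through unchanged):
>>> import jittor as jt
>>> x = jt.array([-2.0, 0.5, 3.0])
>>> y = jt.safe_clip(x, left=0.0, right=1.0)   # forward: [0.0, 0.5, 1.0]
>>> jt.grad(y.sum(), x)                        # backward: all ones
jt.Var([1. 1. 1.], dtype=float32)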
- jittor_core.Var.setitem()¶
Declaration: VarHolder* setitem(VarHolder* x, VarSlices&& slices, VarHolder* y, NanoString op=ns_void)
- jittor_core.Var.shape¶
Document: *
return the shape of the Var.
Declaration: inline NanoVector shape()
Declaration: inline VarHolder* share_with(VarHolder* other)
- jittor_core.Var.sigmoid()¶
Document: *
Returns the sigmoid of the input x.
\[out_i = \frac{1}{1 + e^{-x_i}}\]
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.49443012 0.4305426 -1.0364404 -1.2628382 ], dtype=float32) >>> jt.sigmoid(x) jt.Var([0.62114954 0.6060032 0.2618374 0.2204857 ], dtype=float32) >>> x.sigmoid() jt.Var([0.62114954 0.6060032 0.2618374 0.2204857 ], dtype=float32)
Declaration: VarHolder* sigmoid(VarHolder* x)
- jittor_core.Var.sin()¶
Document: *
Returns the sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.sin(x) jt.Var([ 0.32303742 -0.6527857 -0.76586854 0.9738172 ], dtype=float32) >>> x.sin() jt.Var([ 0.32303742 -0.6527857 -0.76586854 0.9738172 ], dtype=float32)
Declaration: VarHolder* sin(VarHolder* x)
- jittor_core.Var.sinh()¶
Document: *
Returns the hyperbolic sine of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.sinh(x) jt.Var([ 0.3349012 -0.77276015 -0.9873369 2.9425898 ], dtype=float32) >>> x.sinh() jt.Var([ 0.3349012 -0.77276015 -0.9873369 2.9425898 ], dtype=float32)
Declaration: VarHolder* sinh(VarHolder* x)
- jittor_core.Var.sqrt()¶
Document: *
Returns the square root of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.rand(4) * 2 >>> x jt.Var([0.81957287 0.5609612 0.07435933 1.7571875 ], dtype=float32) >>> jt.sqrt(x) jt.Var([0.90530264 0.7489734 0.27268907 1.3255895 ], dtype=float32) >>> x.sqrt() jt.Var([0.90530264 0.7489734 0.27268907 1.3255895 ], dtype=float32)
Declaration: VarHolder* sqrt(VarHolder* x)
- jittor_core.Var.start_grad()¶
Document: *
enable the gradient calculation for the Var.
Declaration: inline VarHolder* start_grad()
- jittor_core.Var.stop_fuse()¶
Document: *
stop operator fusion.
Declaration: inline VarHolder* stop_fuse()
- jittor_core.Var.stop_grad()¶
Document: *
disable the gradient calculation for the Var.
Declaration: inline VarHolder* stop_grad()
- jittor_core.Var.sub()¶
Document: *
Element-wise subtracts y from x and returns a new Var. This operation is equivalent to x - y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
- jittor_core.Var.subtract()¶
Document: *
Element-wise subtracts y from x and returns a new Var. This operation is equivalent to x - y.
[in] x: the first input, a python number or jt.Var.
[in] y: the second input, a python number or jt.Var.
Declaration: VarHolder* subtract(VarHolder* x, VarHolder* y)
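Example (illustrative sketch):
>>> import jittor as jt
>>> a = jt.array([5, 7, 9])
>>> b = jt.array([1, 2, 3])
>>> jt.subtract(a, b)
jt.Var([4 5 6], dtype=int32)
>>> a - b
jt.Var([4 5 6], dtype=int32)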
- jittor_core.Var.sum()¶
Document: *
Returns the sum of the input.
[in] x: the input jt.Var.
[in] dim or dims: int or tuples of ints (optional). If specified, reduce along the given dimension(s).
[in] keepdims: bool (optional). Whether the output has dim retained or not. Defaults to False.
- Example-1::
>>> x = jt.randint(10, shape=(2, 3)) >>> x jt.Var([[4 1 2] [0 2 4]], dtype=int32) >>> jt.sum(x) jt.Var([13], dtype=int32) >>> x.sum() jt.Var([13], dtype=int32) >>> x.sum(dim=1) jt.Var([7 6], dtype=int32) >>> x.sum(dim=1, keepdims=True) jt.Var([[7] [6]], dtype=int32)
Declaration: VarHolder* reduce_add(VarHolder* x, int dim, bool keepdims=false)
Declaration: VarHolder* reduce_add_(VarHolder* x, NanoVector dims=NanoVector(), bool keepdims=false)
Declaration: VarHolder* reduce_add__(VarHolder* x, uint dims_mask, uint keepdims_mask)
- jittor_core.Var.swap()¶
Document: *
swap the data with another Var.
Declaration: inline VarHolder* swap(VarHolder* v)
- jittor_core.Var.sync()¶
Declaration: VarHolder* sync(bool device_sync = false, bool weak_sync = true)
- jittor_core.Var.tan()¶
Document: *
Returns the tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([ 0.32893723 -0.7112559 -0.872391 1.8001337 ], dtype=float32) >>> jt.tan(x) jt.Var([ 0.34133783 -0.8617148 -1.1910915 -4.283673 ], dtype=float32) >>> x.tan() jt.Var([ 0.34133783 -0.8617148 -1.1910915 -4.283673 ], dtype=float32)
Declaration: VarHolder* tan(VarHolder* x)
- jittor_core.Var.tanh()¶
Document: *
Returns the hyperbolic tangent of the input x.
[in] x: the input jt.Var.
- Example-1::
>>> x = jt.randn(4) >>> x jt.Var([-0.85885596 1.187804 0.47249675 0.95933187], dtype=float32) >>> jt.tanh(x) jt.Var([-0.6956678 0.82989657 0.4402144 0.7439787 ], dtype=float32) >>> x.tanh() jt.Var([-0.6956678 0.82989657 0.4402144 0.7439787 ], dtype=float32)
Declaration: VarHolder* tanh(VarHolder* x)
- jittor_core.Var.tape()¶
Declaration: VarHolder* tape(VarHolder* x)
- jittor_core.Var.ternary()¶
Declaration: VarHolder* ternary(VarHolder* cond, VarHolder* x, VarHolder* y)
- jittor_core.Var.uint16()¶
Document: *
Returns a copy of the input var, cast to unsigned int16.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.uint16() jt.Var([4 2 8], dtype=uint16) >>> jt.uint16(x) jt.Var([4 2 8], dtype=uint16)
Declaration: VarHolder* uint16_(VarHolder* x)
- jittor_core.Var.uint32()¶
Document: *
Returns a copy of the input var, cast to unsigned int32.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.uint32() jt.Var([4 2 8], dtype=uint32) >>> jt.uint32(x) jt.Var([4 2 8], dtype=uint32)
Declaration: VarHolder* uint32_(VarHolder* x)
- jittor_core.Var.uint64()¶
Document: *
Returns a copy of the input var, cast to unsigned int64.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.uint64() jt.Var([4 2 8], dtype=uint64) >>> jt.uint64(x) jt.Var([4 2 8], dtype=uint64)
Declaration: VarHolder* uint64_(VarHolder* x)
- jittor_core.Var.uint8()¶
Document: *
Returns a copy of the input var, cast to unsigned int8.
[in] x: the input jt.Var
- Example-1::
>>> x = jt.rand(3) * 10 >>> x jt.Var([4.093273 2.0086648 8.474352 ], dtype=float32) >>> x.uint8() jt.Var([4 2 8], dtype=uint8) >>> jt.uint8(x) jt.Var([4 2 8], dtype=uint8)
Declaration: VarHolder* uint8_(VarHolder* x)
- jittor_core.Var.unary()¶
Declaration: VarHolder* unary(VarHolder* x, NanoString op)
- jittor_core.Var.uncertain_shape¶
Declaration: inline NanoVector uncertain_shape()
- jittor_core.Var.update()¶
Document: *
Updates a parameter or global variable. Unlike assign, it stops the gradient between the original var and the assigned var, and performs the update in the background.
Declaration: VarHolder* update(VarHolder* v)
- jittor_core.Var.where()¶
Document: *
The Where Operator generates the indices where the condition is true.
[in] cond: condition for index generation
[in] dtype: type of return indexes
[out] out: an array of index vars whose length equals the number of dimensions of cond
Example:
jt.where([[0,0,1],[1,0,0]]) # return [jt.Var([0 1], dtype=int32), jt.Var([2 0], dtype=int32)]
Declaration: vector_to_tuple<VarHolder*> where(VarHolder* cond, NanoString dtype=ns_int32)
Document: *
Condition operator; performs cond ? x : y
Declaration: VarHolder* where_(VarHolder* cond, VarHolder* x, VarHolder* y)
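Example (an illustrative sketch of the ternary form; it is assumed to be reachable as jt.where(cond, x, y)):
>>> import jittor as jt
>>> x = jt.array([1.0, 2.0, 3.0])
>>> y = jt.array([-1.0, -2.0, -3.0])
>>> jt.where(x > 2, x, y)   # cond ? x : y
jt.Var([-1. -2.  3.], dtype=float32)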
jittor.Misc¶
This is the API documentation for Jittor's basic operator module. These APIs can be accessed directly via jittor.misc.XXX
or jittor.XXX.
- class jittor.misc.CTCLoss(blank=0, reduction='mean', zero_infinity=False)[源代码]¶
The Connectionist Temporal Classification loss.
- Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Args:
blank (int, default 0): blank label index.
reduction (string): how to reduce the batch loss; if reduction is 'none', it returns an (N,) array; if it is 'mean' or 'sum', it returns a scalar.
- zero_infinity (bool, default False):
whether to zero infinite losses and the associated gradients
Input:
log_probs: shape is [T, N, C], where T is the sequence length, N is the batch size, and C is the number of classes.
targets: shape is [N, S], where S is the target sequence length; elements should be in [0, C).
input_lengths: shape is [N], representing the length of each input; elements should be in [0, T].
target_lengths: shape is [N], representing the length of each target; elements should be in [0, S].
Example:
import jittor as jt T = 50 # Input sequence length C = 20 # Number of classes (including blank) N = 16 # Batch size S = 30 # Target sequence length of longest target in batch (padding length) S_min = 10 # Minimum target length, for demonstration purposes
input = jt.randn(T, N, C).log_softmax(2) # Initialize random batch of targets (0 = blank, 1:C = classes) target = jt.randint(low=1, high=C, shape=(N, S), dtype=jt.int)
input_lengths = jt.full((N,), T, dtype=jt.int) target_lengths = jt.randint(low=S_min, high=S+1, shape=(N,), dtype=jt.int) ctc_loss = jt.CTCLoss() loss = ctc_loss(input, target, input_lengths, target_lengths)
dinput = jt.grad(loss, input)
- jittor.misc.atan2(y, x)¶
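The entry above has no docstring; assuming the usual atan2(y, x) semantics (the element-wise angle of the vector (x, y)), an illustrative sketch:
>>> import jittor as jt
>>> y = jt.array([1.0, -1.0])
>>> x = jt.array([1.0, 1.0])
>>> jt.atan2(y, x)          # approximately [pi/4, -pi/4]
jt.Var([ 0.7853982 -0.7853982], dtype=float32)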
- jittor.misc.auto_parallel(n, src, block_num=1024, **kw)[源代码]¶
Automatically parallelizes (on CPU and GPU) an n-d for-loop function like the one below:
Before:
void inner_func(int n0, int i0, int n1, int i1) {
…
}
for (int i0=0; i0<n0; i0++)
for (int i1=0; i1<n1; i1++)
inner_func(n0, i0, n1, i1, …);
After:
@python.jittor.auto_parallel(2) void inner_func(int n0, int i0, int n1, int i1) {
…
}
inner_func(n0, 0, n1, 0, …);
- jittor.misc.chunk(x, chunks, dim=0)[源代码]¶
Splits a var into a specific number of chunks. Each chunk is a view of the input var.
Last chunk will be smaller if the var size along the given dimension dim is not divisible by chunks.
Args:
input (var) – the var to split.
chunks (int) – number of chunks to return.
dim (int) – dimension along which to split the var.
Example:
>>> x = jt.random((10,3,3))
>>> res = jt.chunk(x, 2, 0)
>>> print(res[0].shape, res[1].shape) [5,3,3,] [5,3,3,]
- jittor.misc.cross(input, other, dim=- 1)[源代码]¶
Returns the cross product of vectors in dimension dim of input and other.
The cross product can be calculated by (a1,a2,a3) x (b1,b2,b3) = (a2b3-a3b2, a3b1-a1b3, a1b2-a2b1).
input and other must have the same size, and the size of their dim dimension should be 3.
If dim is not given, it defaults to the first dimension found with the size 3.
Args:
input (Tensor) – the input tensor.
other (Tensor) – the second input tensor
dim (int, optional) – the dimension to take the cross-product in.
out (Tensor, optional) – the output tensor.
Example:
>>> input = jt.random((6,3))
>>> other = jt.random((6,3))
>>> jt.cross(input, other, dim=1) [[-0.42732686 0.6827885 -0.49206433] [ 0.4651107 0.27036983 -0.5580432 ] [-0.31933784 0.10543461 0.09676848] [-0.58346975 -0.21417202 0.55176204] [-0.40861478 0.01496297 0.38638002] [ 0.18393655 -0.04907863 -0.17928357]]
>>> jt.cross(input, other) [[-0.42732686 0.6827885 -0.49206433] [ 0.4651107 0.27036983 -0.5580432 ] [-0.31933784 0.10543461 0.09676848] [-0.58346975 -0.21417202 0.55176204] [-0.40861478 0.01496297 0.38638002] [ 0.18393655 -0.04907863 -0.17928357]]
- jittor.misc.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)[源代码]¶
The Connectionist Temporal Classification loss.
- Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Input:
log_probs: shape is [T, N, C], where T is the sequence length, N is the batch size, and C is the number of classes.
targets: shape is [N, S], where S is the target sequence length; elements should be in [0, C).
input_lengths: shape is [N], representing the length of each input; elements should be in [0, T].
target_lengths: shape is [N], representing the length of each target; elements should be in [0, S].
blank (int, default 0): blank label index.
reduction (string): how to reduce the batch loss; if reduction is 'none', it returns an (N,) array; if it is 'mean' or 'sum', it returns a scalar.
- zero_infinity (bool, default False):
zero_infinity for grad
Example:
import jittor as jt T = 50 # Input sequence length C = 20 # Number of classes (including blank) N = 16 # Batch size S = 30 # Target sequence length of longest target in batch (padding length) S_min = 10 # Minimum target length, for demonstration purposes
input = jt.randn(T, N, C).log_softmax(2) # Initialize random batch of targets (0 = blank, 1:C = classes) target = jt.randint(low=1, high=C, shape=(N, S), dtype=jt.int)
input_lengths = jt.full((N,), T, dtype=jt.int) target_lengths = jt.randint(low=S_min, high=S+1, shape=(N,), dtype=jt.int) loss = jt.ctc_loss(input, target, input_lengths, target_lengths)
dinput = jt.grad(loss, input)
- jittor.misc.cub_cumsum(x, dim=None)[源代码]¶
cumsum implemented with CUB.
This function should not be called directly. Instead, jittor.misc.cumsum is recommended.
- jittor.misc.expand(x, *shape)[源代码]¶
Expands and broadcasts this array; -1 means the corresponding dimension is left unchanged.
Example:
a = jt.zeros((3,1)) b = a.expand(3, 4) assert b.shape == (3,4) b = a.expand(-1, 4) assert b.shape == (3,4) b = a.expand((3, 4)) assert b.shape == (3,4) b = a.expand((-1, 4)) assert b.shape == (3,4)
- jittor.misc.flip(x, dim=0)[源代码]¶
Reverses the order of an n-D var along the given axes in dims.
Args:
input (var) – the input var.
dims (a list or tuple) – axis to flip on.
Example:
>>> x = jt.array([[1,2,3,4]])
>>> x.flip(1) [[4 3 2 1]]
- jittor.misc.gather(x, dim, index)[源代码]¶
If x is a 3-D array, it is reindexed as follows:
out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0 out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1 out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2
Parameters:
* x (jt.Var) – the source array
* dim (int) – the axis along which to index
* index (jt.Var) – the indices of elements to gather
Example:
t = jt.array([[1, 2], [3, 4]]) data = t.gather(1, jt.array([[0, 0], [1, 0]])) assert (data.data == [[ 1, 1], [ 4, 3]]).all() data = t.gather(0, jt.array([[0, 0], [1, 0]])) assert (data.data == [[ 1, 2], [ 3, 2]]).all()
- jittor.misc.get_max_memory_treemap(build_by=0, do_print=True)[源代码]¶
Shows a treemap of the maximum memory consumption.
Example:
net = jt.models.resnet18() with jt.flag_scope(trace_py_var=3, profile_memory_enable=1): imgs = jt.randn((1,3,224,224)) net(imgs).sync() jt.get_max_memory_treemap()
Output:
| ├─./python/jittor/test/test_memory_profiler.py:100(test_sample) | [19.03 MB; 29.67%] | ./python/jittor/test/test_memory_profiler.py:100 | | | └─./python/jittor/__init__.py:730(__call__) | [19.03 MB; 29.67%] | ./python/jittor/__init__.py:730 | | | └─./python/jittor/models/resnet.py:152(execute) | [19.03 MB; 29.67%] | ./python/jittor/models/resnet.py:152 | | | ├─./python/jittor/models/resnet.py:142(_forward_impl) | | [6.13 MB; 9.55%] | | ./python/jittor/models/resnet.py:142 | | |
- jittor.misc.histc(input, bins, min=0.0, max=0.0)[源代码]¶
Return the histogram of the input N-d array.
- 参数
input – the input array.
bins – number of bins.
min – min of the range.
max – max of the range.
Example:
inputs = jt.randn((40,40)) output = jt.histc(inputs, bins=10)
- jittor.misc.index_add_(x, dim, index, tensor)[源代码]¶
For each index along dimension dim, adds the corresponding slice of tensor to x in place.
Example:
x = jt.ones((5,3)) tensor = jt.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) index = jt.array([0,4,2]) x.index_add_(0, index, tensor) print(x)
>>> jt.Var([[ 2., 3., 4.], [ 1., 1., 1.], [ 8., 9., 10.], [ 1., 1., 1.], [ 5., 6., 7.]])
- jittor.misc.index_fill_(x, dim, indexs, val)[源代码]¶
Fills the elements of the input tensor with value val by selecting the indices in the order given in index.
- Args:
x – the input tensor
dim – dimension along which to index
index – indices of the input tensor to fill in
val – the value to fill with
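Example (a hedged sketch, assuming semantics analogous to torch.Tensor.index_fill_ and capturing the returned Var):
>>> import jittor as jt
>>> x = jt.ones((3, 3))
>>> index = jt.array([0, 2])
>>> y = jt.index_fill_(x, 0, index, -1.0)   # rows 0 and 2 expected to become -1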
- jittor.misc.index_select(x: jittor_core.jittor_core.Var, dim: int, index: jittor_core.jittor_core.Var) jittor_core.jittor_core.Var [源代码]¶
Returns a new var which indexes the x var along dimension dim using the entries in index.
The returned var has the same number of dimensions as the original var (x). The dim-th dimension has the same size as the length of index; other dimensions have the same size as in the original tensor.
- param x
the input tensor.
- param dim
the dimension to index.
- param index
the 1-D tensor containing the indices to index.
Example:
x = jt.randn(3, 4) indices = jt.array([2, 1]) y = jt.index_select(x, 0, indices) assert jt.all_equal(y, x[indices]) y = jt.index_select(x, 1, indices) assert jt.all_equal(y, x[:, indices])
- jittor.misc.knn(unknown, known, k)[源代码]¶
find k neighbors for unknown array from known array
Args:
unknown (var): shape [b, n, c]
known (var): shape [b, m, c]
k (int)
- jittor.misc.make_grid(x, nrow=8, padding=2, normalize=False, range=None, scale_each=False, pad_value=0)[源代码]¶
- jittor.misc.meshgrid(*tensors)[源代码]¶
Takes N tensors, each of which can be a 1-dimensional vector, and creates N n-dimensional grids, where the i-th grid is defined by expanding the i-th input over dimensions defined by the other inputs.
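Example (illustrative sketch, assuming 'ij'-style index grids as in the description above):
>>> import jittor as jt
>>> a = jt.array([1, 2, 3])
>>> b = jt.array([4, 5])
>>> x, y = jt.meshgrid(a, b)
>>> assert x.shape == (3, 2) and y.shape == (3, 2)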
- jittor.misc.multinomial(weights: jittor_core.jittor_core.Var, num_samples: int, replacement: bool = False) jittor_core.jittor_core.Var [源代码]¶
Returns a var where each row contains num_samples indices sampled from the multinomial probability distribution located in the corresponding row of input weights.
- 参数
weights – the input probability.
num_samples – number of samples.
replacement – whether to draw with replacement or not.
Example:
weights = jt.float32([0, 10, 3, 0]) x = jt.multinomial(weights, 2) assert jt.all_equal(x, [1, 2]) or jt.all_equal(x, [2, 1]) x = jt.multinomial(weights, 4, replacement=True) assert x.shape == (4, ) weights = jt.float32([[0,0,2],[0,1,0], [0.5,0,0]]) x = jt.multinomial(weights, 1) assert jt.all_equal(x, [[2],[1],[0]])
- jittor.misc.nms(dets, thresh)[源代码]¶
Performs non-maximum suppression on the detections.

- Parameters
dets – a jt.Var of shape [N, 5], where each row is [x1, y1, x2, y2, score].
thresh – overlap threshold; boxes overlapping a higher-scoring box beyond this threshold are suppressed.
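Example (a minimal sketch; assumes nms is exported at the top level like the other misc functions here, and that it returns the indices of the kept boxes):

dets = jt.array([[10., 10., 50., 50., 0.9],
                 [12., 12., 48., 48., 0.8],
                 [100., 100., 150., 150., 0.7]])
keep = jt.nms(dets, 0.5)
# the second box overlaps the first heavily, so only the first and
# third boxes are expected to survive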
- jittor.misc.nonzero(x)[源代码]¶
Returns the indices of the elements of the input tensor that are not equal to zero.
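Example (a minimal sketch; the assumed output layout is one row of indices per non-zero element):

x = jt.array([[0, 1], [2, 0]])
idx = jt.nonzero(x)
# assumed result: [[0, 1], [1, 0]], the coordinates of the two non-zero entries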
- jittor.misc.normalize(input, p=2, dim=1, eps=1e-30)[源代码]¶
Performs L_p normalization of the input over the specified dimension.
Args:
input – input array of any shape
p (float) – the exponent value in the norm formulation. Default: 2
dim (int) – the dimension to reduce. Default: 1
eps (float) – small value to avoid division by zero. Default: 1e-30
Example:
>>> x = jt.random((6,3))
[[0.18777736 0.9739261  0.77647036]
 [0.13710196 0.27282116 0.30533272]
 [0.7272278  0.5174613  0.9719775 ]
 [0.02566639 0.37504175 0.32676998]
 [0.0231761  0.5207773  0.70337296]
 [0.58966476 0.49547017 0.36724383]]
>>> jt.normalize(x)
[[0.14907198 0.7731768  0.61642134]
 [0.31750825 0.63181424 0.7071063 ]
 [0.5510936  0.39213243 0.736565  ]
 [0.05152962 0.7529597  0.656046  ]
 [0.02647221 0.59484214 0.80340654]
 [0.6910677  0.58067477 0.4303977 ]]
- jittor.misc.numpy_cumsum(x, dim=None)[源代码]¶
Cumulative sum (cumsum) implemented with numpy or cupy.
This function should not be called directly. Instead, jittor.misc.cumsum is recommended.
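Example (a minimal sketch for the recommended entry point, assuming jittor.misc.cumsum shares the (x, dim) signature shown above):

x = jt.array([[1, 2, 3], [4, 5, 6]])
y = jt.misc.cumsum(x, dim=1)
# expected: [[1, 3, 6], [4, 9, 15]]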
- jittor.misc.repeat(x, *shape)[源代码]¶
Repeats this var along the specified dimensions.
Args:
x (var): jittor var.
shape (tuple): int or tuple. The number of times to repeat this var along each dimension.
Example:
>>> x = jt.array([1, 2, 3])
>>> x.repeat(4, 2)
[[ 1,  2,  3,  1,  2,  3],
 [ 1,  2,  3,  1,  2,  3],
 [ 1,  2,  3,  1,  2,  3],
 [ 1,  2,  3,  1,  2,  3]]
>>> x.repeat(4, 2, 1).size()
[4, 2, 3,]
- jittor.misc.roll(x, shifts, dims=None)[源代码]¶
Roll the tensor along the given dimension(s).
Parameters:
x (jt.Var) – the source array.
shifts (int or tuple) – shift offset for each dim.
dims (int or tuple) – the dims to shift.
Examples:
x = jt.array([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2)
y = x.roll(1, 0)
assert (y.numpy() == [[7, 8], [1, 2], [3, 4], [5, 6]]).all()
y = x.roll(-1, 0)
assert (y.numpy() == [[3, 4], [5, 6], [7, 8], [1, 2]]).all()
y = x.roll(shifts=(2, 1), dims=(0, 1))
assert (y.numpy() == [[6, 5], [8, 7], [2, 1], [4, 3]]).all()
- jittor.misc.save_image(x, filepath, nrow: int = 8, padding: int = 2, normalize: bool = False, range=None, scale_each=False, pad_value=0, format=None)[源代码]¶
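Example (a minimal sketch based only on the signature above; the output format is assumed to be inferred from the file extension when format is not given):

imgs = jt.random((16, 3, 32, 32))   # hypothetical batch of images
jt.misc.save_image(imgs, 'grid.png', nrow=4, normalize=True)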
- jittor.misc.scatter(x: jittor_core.jittor_core.Var, dim: int, index: jittor_core.jittor_core.Var, src: jittor_core.jittor_core.Var, reduce='void')[源代码]¶
If x is a 3-D array, x is rewritten as:
self[index[i][j][k]][j][k] = src[i][j][k]  # if dim == 0
self[i][index[i][j][k]][k] = src[i][j][k]  # if dim == 1
self[i][j][index[i][j][k]] = src[i][j][k]  # if dim == 2
Parameters:
x (jt.Var) – the input array.
dim (int) – the axis along which to index.
index (jt.Var) – the indices of elements to scatter; can be either empty or of the same dimensionality as src. When empty, the operation returns self unchanged.
src (jt.Var) – the source element(s) to scatter.
reduce (str, optional) – reduction operation to apply, either 'add' or 'multiply'; the default 'void' performs plain assignment.
Example:
import numpy as np

src = jt.arange(1, 11).reshape((2, 5))
index = jt.array([[0, 1, 2, 0]])
x = jt.zeros((3, 5), dtype=src.dtype).scatter_(0, index, src)
assert (x.data == [[1, 0, 0, 4, 0],
                   [0, 2, 0, 0, 0],
                   [0, 0, 3, 0, 0]]).all()
index = jt.array([[0, 1, 2], [0, 1, 4]])
x = jt.zeros((3, 5), dtype=src.dtype).scatter_(1, index, src)
assert (x.data == [[1, 2, 3, 0, 0],
                   [6, 7, 0, 0, 8],
                   [0, 0, 0, 0, 0]]).all()
x = jt.full((2, 4), 2.).scatter_(1, jt.array([[2], [3]]), jt.array(1.23), reduce='multiply')
assert np.allclose(x.data, [[2.0000, 2.0000, 2.4600, 2.0000],
                            [2.0000, 2.0000, 2.0000, 2.4600]]), x
x = jt.full((2, 4), 2.).scatter_(1, jt.array([[2], [3]]), jt.array(1.23), reduce='add')
assert np.allclose(x.data, [[2.0000, 2.0000, 3.2300, 2.0000],
                            [2.0000, 2.0000, 2.0000, 3.2300]])
- jittor.misc.searchsorted(sorted, values, right=False)[源代码]¶
Finds, for each element of values, the index in the innermost dimension of sorted at which it would be inserted so that sorted stays ordered.
Example:
sorted = jt.array([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]])
values = jt.array([[3, 6, 9], [3, 6, 9]])
ret = jt.searchsorted(sorted, values)
assert (ret == [[1, 3, 4], [1, 2, 4]]).all(), ret
ret = jt.searchsorted(sorted, values, right=True)
assert (ret == [[2, 3, 5], [1, 3, 4]]).all(), ret
sorted_1d = jt.array([1, 3, 5, 7, 9])
ret = jt.searchsorted(sorted_1d, values)
assert (ret == [[1, 3, 4], [1, 3, 4]]).all(), ret
- jittor.misc.set_global_seed(seed, different_seed_for_mpi=True)[源代码]¶
Sets the seeds of the random number generators of Python, numpy and jittor, simultaneously.
Jittor also guarantees that each worker of jittor.dataset.Dataset holds a different seed, and that each process holds a different seed when using MPI, which is (global_seed ^ (worker_id*1167)) ^ 1234 + jt.rank * 2591.
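Example (a minimal sketch; assumes the function is exported at the top level like the other misc functions in this section):

jt.set_global_seed(0)
a = jt.random((2, 2))
jt.set_global_seed(0)
b = jt.random((2, 2))
# within a single process, a and b come from identically seeded generators
assert (a.data == b.data).all()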
- jittor.misc.split(d, split_size, dim=0)[源代码]¶
Splits the tensor into chunks. Each chunk is a view of the original tensor.
If split_size is an integer, the tensor will be split into equally sized chunks (if possible). The last chunk will be smaller if the tensor size along the given dimension dim is not divisible by split_size.
If split_size is a list, the tensor will be split into len(split_size) chunks whose sizes along dim are given by split_size.
- Args:
d (Tensor) – tensor to split.
split_size (int) or (list(int)) – size of a single chunk or list of sizes for each chunk
dim (int) – dimension along which to split the tensor.
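Example (a minimal sketch; assumes split is exported at the top level like stack below):

a = jt.arange(10).reshape(5, 2)
parts = jt.split(a, 2)         # chunks of sizes 2, 2 and 1 along dim 0
parts = jt.split(a, [1, 4])    # chunks of sizes 1 and 4 along dim 0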
- jittor.misc.stack(x, dim=0)[源代码]¶
Concatenates a sequence of vars along a new dimension.
All vars need to be of the same size.
Args:
x (sequence of vars) – sequence of vars to concatenate.
dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated vars (inclusive).
Example:
>>> a1 = jt.array([[1,2,3]])
>>> a2 = jt.array([[4,5,6]])
>>> jt.stack([a1, a2], 0)
[[[1 2 3]]
 [[4 5 6]]]
- jittor.misc.tril(input: jittor_core.jittor_core.Var, diagonal: int = 0) jittor_core.jittor_core.Var [源代码]¶
Returns the lower triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result are set to 0.
- Parameters
input – the input tensor.
diagonal – the diagonal to consider (int).
Example:
a = jt.ones(3, 3)
b = jt.tril(a)
assert jt.all_equal(b, [[1, 0, 0], [1, 1, 0], [1, 1, 1]])
b = jt.tril(a, diagonal=1)
assert jt.all_equal(b, [[1, 1, 0], [1, 1, 1], [1, 1, 1]])
b = jt.tril(a, diagonal=-1)
assert jt.all_equal(b, [[0, 0, 0], [1, 0, 0], [1, 1, 0]])
- jittor.misc.triu(input: jittor_core.jittor_core.Var, diagonal: int = 0) jittor_core.jittor_core.Var [源代码]¶
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result are set to 0.
- Parameters
input – the input tensor.
diagonal – the diagonal to consider (int).
Example:
a = jt.ones(3, 3)
b = jt.triu(a)
assert jt.all_equal(b, [[1, 1, 1], [0, 1, 1], [0, 0, 1]])
b = jt.triu(a, diagonal=1)
assert jt.all_equal(b, [[0, 1, 1], [0, 0, 1], [0, 0, 0]])
b = jt.triu(a, diagonal=-1)
assert jt.all_equal(b, [[1, 1, 1], [1, 1, 1], [0, 1, 1]])
- jittor.misc.triu_(x, diagonal=0)[源代码]¶
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input; the other elements of the result are set to 0.
The upper triangular part of the matrix is defined as the elements on and above the diagonal.
- Args:
x – the input tensor.
diagonal – the diagonal to consider. Default: 0
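Example (a minimal sketch; assumes, as the trailing underscore suggests, that triu_ is the in-place counterpart of jt.triu and is available as a Var method):

x = jt.ones(3, 3)
x.triu_(diagonal=1)
# assumed result: [[0, 1, 1], [0, 0, 1], [0, 0, 0]]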
- jittor.misc.unbind(x, dim=0)[源代码]¶
Removes a var dimension.
Returns a tuple of all slices along the given dimension, with that dimension removed.
Args:
x (var) – the var to unbind
dim (int) – dimension to remove
Example:
a = jt.random((3, 3))
b = jt.unbind(a, 0)   # a tuple of three vars, each of shape (3,)
- jittor.misc.unique(input: jittor_core.jittor_core.Var, return_inverse: bool = False, return_counts: bool = False, dim: Optional[int] = None)[源代码]¶
Returns the unique elements of the input tensor.
Args:
input (var) – the input var
return_inverse (bool) – Whether to also return the indices for where elements in the original input ended up in the returned unique list. default: False
return_counts (bool) – Whether to also return the counts for each unique element. default: False
dim (int) – the dimension to apply unique. If None, the unique of the flattened input is returned. default: None
Example:
>>> jittor.unique(jittor.array([1, 3, 2, 3]))
jt.Var([1 2 3], dtype=int32)
>>> jittor.unique(jittor.array([1, 3, 2, 3, 2]), return_inverse=True, return_counts=True)
(jt.Var([1 2 3], dtype=int32), jt.Var([0 2 1 2 1], dtype=int32), jt.Var([1 2 2], dtype=int32))
>>> jittor.unique(jittor.array([[1, 3], [2, 3]]), return_inverse=True)
(jt.Var([1 2 3], dtype=int32), jt.Var([[0 2]
 [1 2]], dtype=int32))
>>> jittor.unique(jittor.array([[1, 3], [1, 3]]), dim=0)
jt.Var([[1 3]], dtype=int32)