The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficient serialization of Tensors and arbitrary types, and other useful utilities.
It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
| Returns True if obj is a PyTorch tensor. | |
| Returns True if obj is a PyTorch storage object. | |
Returns True if the data type of input is a complex data type, i.e., one of torch.complex64 and torch.complex128. |
|
Returns True if the input is a conjugated tensor, i.e. its conjugate bit is set to True. |
|
Returns True if the data type of input is a floating point data type, i.e., one of torch.float64, torch.float32, torch.float16, and torch.bfloat16. |
|
Returns True if the input is a single element tensor which is not equal to zero after type conversions. |
|
Sets the default floating point dtype to d. |
|
Get the current default floating point torch.dtype. |
|
Sets the default torch.Tensor to be allocated on device. |
|
Gets the default torch.Tensor to be allocated on device. |
|
Returns the total number of elements in the input tensor. |
|
| Set options for printing. | |
| Disables denormal floating numbers on CPU. |
Note
Random sampling creation ops are listed under Random sampling and include: torch.rand(), torch.rand_like(), torch.randn(), torch.randn_like(), torch.randint(), torch.randint_like(), and torch.randperm(). You may also use torch.empty() with the in-place random sampling methods to create torch.Tensor s with values sampled from a broader range of distributions.
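A minimal sketch of the random sampling creation ops listed in the note, including the torch.empty() plus in-place sampling pattern (shapes and the exponential distribution are chosen only for illustration):

```python
import torch

a = torch.rand(2, 3)             # uniform on [0, 1)
b = torch.randn_like(a)          # standard normal, same shape/dtype as a
c = torch.randint(0, 10, (4,))   # integers in [0, 10)
p = torch.randperm(5)            # random permutation of 0..4

# torch.empty() allocates without initializing; an in-place sampler such
# as Tensor.exponential_() then fills it from the chosen distribution.
e = torch.empty(2, 3).exponential_(lambd=1.0)
```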
Constructs a tensor with no autograd history (also known as a “leaf tensor”, see Autograd mechanics) by copying data. |
|
Constructs a sparse tensor in COO(rdinate) format with specified values at the given indices. |
|
Constructs a sparse tensor in CSR (Compressed Sparse Row) with specified values at the given crow_indices and col_indices. |
|
Constructs a sparse tensor in CSC (Compressed Sparse Column) with specified values at the given ccol_indices and row_indices. |
|
Constructs a sparse tensor in BSR (Block Compressed Sparse Row) with specified 2-dimensional blocks at the given crow_indices and col_indices. |
|
Constructs a sparse tensor in BSC (Block Compressed Sparse Column) with specified 2-dimensional blocks at the given ccol_indices and row_indices. |
|
Converts obj to a tensor. |
|
Converts data into a tensor, sharing data and preserving autograd history if possible. |
|
Create a view of an existing torch.Tensor input with specified size, stride and storage_offset. |
|
| Creates a CPU tensor with a storage backed by a memory-mapped file. | |
Creates a Tensor from a numpy.ndarray. |
|
Converts a tensor from an external library into a torch.Tensor. |
|
Creates a 1-dimensional Tensor from an object that implements the Python buffer protocol. |
|
Returns a tensor filled with the scalar value 0, with the shape defined by the variable argument size. |
|
Returns a tensor filled with the scalar value 0, with the same size as input. |
|
Returns a tensor filled with the scalar value 1, with the shape defined by the variable argument size. |
|
Returns a tensor filled with the scalar value 1, with the same size as input. |
|
Returns a 1-D tensor of size ⌈(end − start) / step⌉ with values from the interval [start, end) taken with common difference step beginning from start. |
|
Returns a 1-D tensor of size ⌊(end − start) / step⌋ + 1 with values from start to end with step step. |
|
Creates a one-dimensional tensor of size steps whose values are evenly spaced from start to end, inclusive. |
|
Creates a one-dimensional tensor of size steps whose values are evenly spaced from base^start to base^end, inclusive, on a logarithmic scale with base base. |
|
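As a sketch of the range-construction ops above (the endpoints and step values here are illustrative only):

```python
import torch

r = torch.arange(0, 10, 2)             # size ceil((end - start) / step): [0, 2, 4, 6, 8]
l = torch.linspace(0.0, 1.0, steps=5)  # 5 evenly spaced values, endpoints inclusive
g = torch.logspace(0, 3, steps=4)      # 10**0 .. 10**3 on a log scale: [1, 10, 100, 1000]
```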
| Returns a 2-D tensor with ones on the diagonal and zeros elsewhere. | |
| Returns a tensor filled with uninitialized data. | |
Returns an uninitialized tensor with the same size as input. |
|
Creates a tensor with the specified size and stride and filled with undefined data. |
|
Creates a tensor of size size filled with fill_value. |
|
Returns a tensor with the same size as input filled with fill_value. |
|
| Converts a float tensor to a quantized tensor with given scale and zero point. | |
| Converts a float tensor to a per-channel quantized tensor with given scales and zero points. | |
| Returns an fp32 Tensor by dequantizing a quantized Tensor | |
Constructs a complex tensor with its real part equal to real and its imaginary part equal to imag. |
|
Constructs a complex tensor whose elements are Cartesian coordinates corresponding to the polar coordinates with absolute value abs and angle angle. |
|
Computes the Heaviside step function for each element in input. |
| Returns a view of the tensor conjugated and with the last two dimensions transposed. | |
Returns a tensor containing the indices of all non-zero elements of input. |
|
Concatenates the given sequence of tensors in tensors in the given dimension. |
|
Alias of torch.cat(). |
|
Alias of torch.cat(). |
|
Returns a view of input with a flipped conjugate bit. |
|
| Attempts to split a tensor into the specified number of chunks. | |
Splits input, a tensor with three or more dimensions, into multiple tensors depthwise according to indices_or_sections. |
|
Creates a new tensor by horizontally stacking the tensors in tensors. |
|
| Stack tensors in sequence depthwise (along third axis). | |
| Gathers values along an axis specified by dim. | |
Splits input, a tensor with one or more dimensions, into multiple tensors horizontally according to indices_or_sections. |
|
| Stack tensors in sequence horizontally (column wise). | |
See index_add_() for function description. |
|
See index_add_() for function description. |
|
See index_reduce_() for function description. |
|
Returns a new tensor which indexes the input tensor along dimension dim using the entries in index which is a LongTensor. |
|
Returns a new 1-D tensor which indexes the input tensor according to the boolean mask mask which is a BoolTensor. |
|
Moves the dimension(s) of input at the position(s) in source to the position(s) in destination. |
|
Alias for torch.movedim(). |
|
Returns a new tensor that is a narrowed version of input tensor. |
|
Same as Tensor.narrow() except this returns a copy rather than shared storage. |
|
Returns a view of the original tensor input with its dimensions permuted. |
|
Returns a tensor with the same data and number of elements as input, but with the specified shape. |
|
Alias of torch.vstack(). |
|
Slices the input tensor along the selected dimension at the given index. |
|
Out-of-place version of torch.Tensor.scatter_() |
|
Embeds the values of the src tensor into input along the diagonal elements of input, with respect to dim1 and dim2. |
|
Embeds the values of the src tensor into input at the given index. |
|
Embeds the values of the src tensor into input at the given dimension. |
|
Out-of-place version of torch.Tensor.scatter_add_() |
|
Out-of-place version of torch.Tensor.scatter_reduce_() |
|
| Splits the tensor into chunks. | |
Returns a tensor with all specified dimensions of input of size 1 removed. |
|
| Concatenates a sequence of tensors along a new dimension. | |
Alias for torch.transpose(). |
|
Alias for torch.transpose(). |
|
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1. |
|
Returns a new tensor with the elements of input at the given indices. |
|
Selects values from input at the 1-dimensional indices from indices along the given dim. |
|
Splits a tensor into multiple sub-tensors, all of which are views of input, along dimension dim according to the indices or number of sections specified by indices_or_sections. |
|
Constructs a tensor by repeating the elements of input. |
|
Returns a tensor that is a transposed version of input. |
|
| Removes a tensor dimension. | |
| Converts a tensor of flat indices into a tuple of coordinate tensors that index into an arbitrary tensor of the specified shape. | |
| Returns a new tensor with a dimension of size one inserted at the specified position. | |
Splits input, a tensor with two or more dimensions, into multiple tensors vertically according to indices_or_sections. |
|
| Stack tensors in sequence vertically (row wise). | |
Return a tensor of elements selected from either input or other, depending on condition. |
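A small sketch of torch.where's elementwise selection (the input values are illustrative):

```python
import torch

# where(condition, input, other) picks elementwise from input where the
# condition is True, and from other elsewhere.
x = torch.tensor([-1.0, 2.0, -3.0])
y = torch.where(x > 0, x, torch.zeros_like(x))  # ReLU-like selection
```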
Within the PyTorch repo, we define an “Accelerator” as a torch.device that is being used alongside a CPU to speed up computation. These devices use an asynchronous execution scheme, with torch.Stream and torch.Event as their main way to perform synchronization. We also assume that only one such accelerator can be available at once on a given host. This allows us to use the current accelerator as the default device for relevant concepts such as pinned memory, Stream device_type, FSDP, etc.
As of today, accelerator devices are (in no particular order) “CUDA”, “MTIA”, “XPU”, “MPS”, “HPU”, and PrivateUse1 (many devices not in the PyTorch repo itself).
Many tools in the PyTorch ecosystem use fork to create subprocesses (for example, for data loading or intra-op parallelism), so it is important to delay as long as possible any operation that would prevent further forks. This is especially important here, as most accelerators' initialization has such an effect. In practice, keep in mind that torch.accelerator.current_accelerator() performs a compile-time check by default and is thus always fork-safe. On the contrary, passing the check_available=True flag to this function, or calling torch.accelerator.is_available(), will usually prevent later forks.
Some backends provide an experimental opt-in option to make the runtime availability check fork-safe; for example, when using the CUDA device, PYTORCH_NVML_BASED_CUDA_CHECK=1 can be set.
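A fork-safe query might be sketched as follows; the hasattr guard is an assumption to cover older PyTorch releases that predate the torch.accelerator module, and the runtime check is left commented out because, per the note above, it can prevent later forks:

```python
import torch

def current_accel_device():
    """Fork-safe query: compile-time check only, no device initialization."""
    if not hasattr(torch, "accelerator"):  # older PyTorch releases
        return None
    # Default call does not initialize the backend, so it stays fork-safe.
    return torch.accelerator.current_accelerator()
    # By contrast, torch.accelerator.is_available() (or check_available=True)
    # usually initializes the backend and prevents later fork().
```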
| Creates and returns a generator object that manages the state of the algorithm which produces pseudo random numbers. |
| Sets the seed for generating random numbers to a non-deterministic random number on all devices. | |
| Sets the seed for generating random numbers on all devices. | |
| Returns the initial seed for generating random numbers as a Python long. | |
| Returns the random number generator state as a torch.ByteTensor. | |
| Sets the random number generator state. |
| Draws binary random numbers (0 or 1) from a Bernoulli distribution. | |
Returns a tensor where each row contains num_samples indices sampled from the multinomial (a stricter definition would be multivariate, refer to torch.distributions.multinomial.Multinomial for more details) probability distribution located in the corresponding row of tensor input. |
|
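A sketch of torch.multinomial: each row of the weight tensor is an unnormalized probability distribution, and the returned tensor holds sampled category indices (the weights below are illustrative):

```python
import torch

weights = torch.tensor([[0.0, 10.0, 3.0, 0.0],
                        [1.0,  1.0, 1.0, 1.0]])
# Two draws per row, without replacement by default; zero-weight
# categories are not drawn while nonzero ones remain.
idx = torch.multinomial(weights, num_samples=2)  # shape (2, 2)
```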
| Returns a tensor of random numbers drawn from separate normal distributions whose mean and standard deviation are given. | |
Returns a tensor of the same size as input with each element sampled from a Poisson distribution with rate parameter given by the corresponding element in input, i.e., out_i ∼ Poisson(input_i). |
|
| Returns a tensor filled with random numbers from a uniform distribution on the interval [0,1) | |
Returns a tensor with the same size as input that is filled with random numbers from a uniform distribution on the interval [0,1). |
|
Returns a tensor filled with random integers generated uniformly between low (inclusive) and high (exclusive). |
|
Returns a tensor with the same shape as Tensor input filled with random integers generated uniformly between low (inclusive) and high (exclusive). |
|
| Returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). | |
Returns a tensor with the same size as input that is filled with random numbers from a normal distribution with mean 0 and variance 1. |
|
Returns a random permutation of integers from 0 to n - 1. |
There are a few more in-place random sampling functions defined on Tensors as well. Click through to refer to their documentation:
torch.Tensor.bernoulli_() – in-place version of torch.bernoulli()
torch.Tensor.cauchy_() – numbers drawn from the Cauchy distribution
torch.Tensor.exponential_() – numbers drawn from the exponential distribution
torch.Tensor.geometric_() – elements drawn from the geometric distribution
torch.Tensor.log_normal_() – samples from the log-normal distribution
torch.Tensor.normal_() – in-place version of torch.normal()
torch.Tensor.random_() – numbers sampled from the discrete uniform distribution
torch.Tensor.uniform_() – numbers sampled from the continuous uniform distribution
quasirandom.SobolEngine |
The torch.quasirandom.SobolEngine is an engine for generating (scrambled) Sobol sequences. |
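A minimal SobolEngine sketch; the dimension and number of draws are arbitrary choices for illustration:

```python
import torch

# The engine generates quasi-random points in the unit hypercube [0, 1)^d.
engine = torch.quasirandom.SobolEngine(dimension=3, scramble=False)
points = engine.draw(4)  # shape (4, 3)
```

Pass scramble=True (with an optional seed) for scrambled Sobol sequences.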
| Saves an object to a disk file. | |
Loads an object saved with torch.save() from a file. |
| Returns the number of threads used for parallelizing CPU operations | |
| Sets the number of threads used for intraop parallelism on CPU. | |
| Returns the number of threads used for inter-op parallelism on CPU (e.g. in JIT interpreter). | |
| Sets the number of threads used for interop parallelism (e.g. in JIT interpreter) on CPU. |
The context managers torch.no_grad(), torch.enable_grad(), and torch.set_grad_enabled() are helpful for locally disabling and enabling gradient computation. See Locally disabling gradient computation for more details on their usage. These context managers are thread local, so they won’t work if you send work to another thread using the threading module, etc.
Examples:
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
| Context-manager that disables gradient calculation. | |
| Context-manager that enables gradient calculation. | |
autograd.grad_mode.set_grad_enabled |
Context-manager that sets gradient calculation on or off. |
| Returns True if grad mode is currently enabled. | |
autograd.grad_mode.inference_mode |
Context-manager that enables or disables inference mode. |
| Returns True if inference mode is currently enabled. |
inf |
A floating-point positive infinity. Alias for math.inf. |
nan |
A floating-point “not a number” value. This value is not a legal number. Alias for math.nan. |
Computes the absolute value of each element in input. |
|
Alias for torch.abs() |
|
Computes the inverse cosine of each element in input. |
|
Alias for torch.acos(). |
|
Returns a new tensor with the inverse hyperbolic cosine of the elements of input. |
|
Alias for torch.acosh(). |
|
Adds other, scaled by alpha, to input. |
|
Performs the element-wise division of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input. |
|
Performs the element-wise multiplication of tensor1 by tensor2, multiplies the result by the scalar value and adds it to input. |
|
Computes the element-wise angle (in radians) of the given input tensor. |
|
Returns a new tensor with the arcsine of the elements of input. |
|
Alias for torch.asin(). |
|
Returns a new tensor with the inverse hyperbolic sine of the elements of input. |
|
Alias for torch.asinh(). |
|
Returns a new tensor with the arctangent of the elements of input. |
|
Alias for torch.atan(). |
|
Returns a new tensor with the inverse hyperbolic tangent of the elements of input. |
|
Alias for torch.atanh(). |
|
| Element-wise arctangent of input_i / other_i with consideration of the quadrant. | |
Alias for torch.atan2(). |
|
| Computes the bitwise NOT of the given input tensor. | |
Computes the bitwise AND of input and other. |
|
Computes the bitwise OR of input and other. |
|
Computes the bitwise XOR of input and other. |
|
Computes the left arithmetic shift of input by other bits. |
|
Computes the right arithmetic shift of input by other bits. |
|
Returns a new tensor with the ceil of the elements of input, the smallest integer greater than or equal to each element. |
|
Clamps all elements in input into the range [ min, max ]. |
|
Alias for torch.clamp(). |
|
Computes the element-wise conjugate of the given input tensor. |
|
Create a new floating-point tensor with the magnitude of input and the sign of other, elementwise. |
|
Returns a new tensor with the cosine of the elements of input. |
|
Returns a new tensor with the hyperbolic cosine of the elements of input. |
|
Returns a new tensor with each of the elements of input converted from angles in degrees to radians. |
|
Divides each element of the input input by the corresponding element of other. |
|
Alias for torch.div(). |
|
Alias for torch.special.digamma(). |
|
Alias for torch.special.erf(). |
|
Alias for torch.special.erfc(). |
|
Alias for torch.special.erfinv(). |
|
Returns a new tensor with the exponential of the elements of the input tensor input. |
|
Alias for torch.special.exp2(). |
|
Alias for torch.special.expm1(). |
|
Returns a new tensor with the data in input fake quantized per channel using scale, zero_point, quant_min and quant_max, across the channel specified by axis. |
|
Returns a new tensor with the data in input fake quantized using scale, zero_point, quant_min and quant_max. |
|
Alias for torch.trunc() |
|
Raises input to the power of exponent, elementwise, in double precision. |
|
Returns a new tensor with the floor of the elements of input, the largest integer less than or equal to each element. |
|
| Applies C++’s std::fmod entrywise. | |
Computes the fractional portion of each element in input. |
|
Decomposes input into mantissa and exponent tensors such that input = mantissa × 2^exponent. |
|
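A brief sketch of torch.frexp (the input values are illustrative); for nonzero inputs the mantissa magnitude lies in [0.5, 1):

```python
import torch

x = torch.tensor([1.0, 8.0, 0.5])
mantissa, exponent = torch.frexp(x)
# torch.ldexp recombines them: mantissa * 2**exponent == x
recon = torch.ldexp(mantissa, exponent)
```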
| Estimates the gradient of a function g:Rn→R in one or more dimensions using the second-order accurate central differences method and either first or second order estimates at the boundaries. | |
Returns a new tensor containing imaginary values of the self tensor. |
|
Multiplies input by 2 ** other. |
|
Does a linear interpolation of two tensors start (given by input) and end based on a scalar or tensor weight and returns the resulting out tensor. |
|
Computes the natural logarithm of the absolute value of the gamma function on input. |
|
Returns a new tensor with the natural logarithm of the elements of input. |
|
Returns a new tensor with the logarithm to the base 10 of the elements of input. |
|
Returns a new tensor with the natural logarithm of (1 + input). |
|
Returns a new tensor with the logarithm to the base 2 of the elements of input. |
|
| Logarithm of the sum of exponentiations of the inputs. | |
| Logarithm of the sum of exponentiations of the inputs in base-2. | |
| Computes the element-wise logical AND of the given input tensors. | |
| Computes the element-wise logical NOT of the given input tensor. | |
| Computes the element-wise logical OR of the given input tensors. | |
| Computes the element-wise logical XOR of the given input tensors. | |
Alias for torch.special.logit(). |
|
| Given the legs of a right triangle, return its hypotenuse. | |
Alias for torch.special.i0(). |
|
Alias for torch.special.gammainc(). |
|
Alias for torch.special.gammaincc(). |
|
Multiplies input by other. |
|
Alias for torch.mul(). |
|
Alias for torch.special.multigammaln(). |
|
Replaces NaN, positive infinity, and negative infinity values in input with the values specified by nan, posinf, and neginf, respectively. |
|
Returns a new tensor with the negative of the elements of input. |
|
Alias for torch.neg() |
|
Return the next floating-point value after input towards other, elementwise. |
|
Alias for torch.special.polygamma(). |
|
Returns input. |
|
Takes the power of each element in input with exponent and returns a tensor with the result. |
|
| Applies batch normalization on a 4D (NCHW) quantized tensor. | |
| Applies a 1D max pooling over an input quantized tensor composed of several input planes. | |
| Applies a 2D max pooling over an input quantized tensor composed of several input planes. | |
Returns a new tensor with each of the elements of input converted from angles in radians to degrees. |
|
Returns a new tensor containing real values of the self tensor. |
|
Returns a new tensor with the reciprocal of the elements of input |
|
| Computes Python’s modulus operation entrywise. | |
Rounds elements of input to the nearest integer. |
|
Returns a new tensor with the reciprocal of the square-root of each of the elements of input. |
|
Alias for torch.special.expit(). |
|
Returns a new tensor with the signs of the elements of input. |
|
| This function is an extension of torch.sign() to complex tensors. | |
Tests if each element of input has its sign bit set or not. |
|
Returns a new tensor with the sine of the elements of input. |
|
Alias for torch.special.sinc(). |
|
Returns a new tensor with the hyperbolic sine of the elements of input. |
|
Alias for torch.nn.functional.softmax(). |
|
Returns a new tensor with the square-root of the elements of input. |
|
Returns a new tensor with the square of the elements of input. |
|
Subtracts other, scaled by alpha, from input. |
|
Alias for torch.sub(). |
|
Returns a new tensor with the tangent of the elements of input. |
|
Returns a new tensor with the hyperbolic tangent of the elements of input. |
|
Alias for torch.div() with rounding_mode=None. |
|
Returns a new tensor with the truncated integer values of the elements of input. |
|
Alias for torch.special.xlogy(). |
Returns the indices of the maximum value of all elements in the input tensor. |
|
| Returns the indices of the minimum value(s) of the flattened tensor or along a dimension | |
Returns the maximum value of each slice of the input tensor in the given dimension(s) dim. |
|
Returns the minimum value of each slice of the input tensor in the given dimension(s) dim. |
|
Computes the minimum and maximum values of the input tensor. |
|
Tests if all elements in input evaluate to True. |
|
Tests if any element in input evaluates to True. |
|
Returns the maximum value of all elements in the input tensor. |
|
Returns the minimum value of all elements in the input tensor. |
|
Returns the p-norm of (input − other). |
|
Returns the log of summed exponentials of each row of the input tensor in the given dimension dim. |
|
| Computes the mean of all non-NaN elements along the specified dimensions. | |
Returns the median of the values in input. |
|
Returns the median of the values in input, ignoring NaN values. |
|
Returns a namedtuple (values, indices) where values is the mode value of each row of the input tensor in the given dimension dim, i.e. a value which appears most often in that row, and indices is the index location of each mode value found. |
|
| Returns the matrix norm or vector norm of a given tensor. | |
| Returns the sum of all elements, treating Not a Numbers (NaNs) as zero. | |
Returns the product of all elements in the input tensor. |
|
Computes the q-th quantiles of each row of the input tensor along the dimension dim. |
|
This is a variant of torch.quantile() that “ignores” NaN values, computing the quantiles q as if NaN values in input did not exist. |
|
Calculates the standard deviation over the dimensions specified by dim. |
|
Calculates the standard deviation and mean over the dimensions specified by dim. |
|
Returns the sum of all elements in the input tensor. |
|
| Returns the unique elements of the input tensor. | |
| Eliminates all but the first element from every consecutive group of equivalent elements. | |
Calculates the variance over the dimensions specified by dim. |
|
Calculates the variance and mean over the dimensions specified by dim. |
|
Counts the number of non-zero values in the tensor input along the given dim. |
This function checks if input and other satisfy the condition: ∣input − other∣ ≤ atol + rtol × ∣other∣, elementwise. |
|
| Returns the indices that sort a tensor along a given dimension in ascending order by value. | |
| Computes element-wise equality | |
True if two tensors have the same size and elements, False otherwise. |
|
| Computes input≥other element-wise. | |
Alias for torch.ge(). |
|
| Computes input>other element-wise. | |
Alias for torch.gt(). |
|
Returns a new tensor with boolean elements representing if each element of input is “close” to the corresponding element of other. |
|
| Returns a new tensor with boolean elements representing if each element is finite or not. | |
Tests if each element of elements is in test_elements. |
|
Tests if each element of input is infinite (positive or negative infinity) or not. |
|
Tests if each element of input is positive infinity or not. |
|
Tests if each element of input is negative infinity or not. |
|
Returns a new tensor with boolean elements representing if each element of input is NaN or not. |
|
Returns a new tensor with boolean elements representing if each element of input is real-valued or not. |
|
Returns a namedtuple (values, indices) where values is the k th smallest element of each row of the input tensor in the given dimension dim. |
|
| Computes input≤other element-wise. | |
Alias for torch.le(). |
|
| Computes input<other element-wise. | |
Alias for torch.lt(). |
|
Computes the element-wise maximum of input and other. |
|
Computes the element-wise minimum of input and other. |
|
Computes the element-wise maximum of input and other. |
|
Computes the element-wise minimum of input and other. |
|
| Computes input≠other element-wise. | |
Alias for torch.ne(). |
|
Sorts the elements of the input tensor along a given dimension in ascending order by value. |
|
Returns the k largest elements of the given input tensor along a given dimension. |
|
Sorts the elements of the input tensor along its first dimension in ascending order by value. |
| Short-time Fourier transform (STFT). | |
| Inverse short time Fourier Transform. | |
| Bartlett window function. | |
| Blackman window function. | |
| Hamming window function. | |
| Hann window function. | |
Computes the Kaiser window with window length window_length and shape parameter beta. |
| Returns a 1-dimensional view of each input tensor with zero dimensions. | |
| Returns a 2-dimensional view of each input tensor with zero dimensions. | |
| Returns a 3-dimensional view of each input tensor with zero dimensions. | |
| Count the frequency of each value in an array of non-negative ints. | |
| Create a block diagonal matrix from provided tensors. | |
| Broadcasts the given tensors according to Broadcasting semantics. | |
Broadcasts input to the shape shape. |
|
Similar to broadcast_tensors() but for shapes. |
|
Returns the indices of the buckets to which each value in the input belongs, where the boundaries of the buckets are set by boundaries. |
|
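A sketch of torch.bucketize with illustrative boundaries and values; the boundaries tensor must be sorted:

```python
import torch

boundaries = torch.tensor([1, 3, 5, 7, 9])
values = torch.tensor([[3, 6, 9], [3, 6, 9]])
# With the default right=False, bucket i satisfies
# boundaries[i-1] < v <= boundaries[i].
buckets = torch.bucketize(values, boundaries)
```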
| Do cartesian product of the given sequence of tensors. | |
| Computes, in batch, the p-norm distance between each pair of the two collections of row vectors. | |
Returns a copy of input. |
|
| Compute combinations of length r of the given tensor. | |
Estimates the Pearson product-moment correlation coefficient matrix of the variables given by the input matrix, where rows are the variables and columns are the observations. |
|
Estimates the covariance matrix of the variables given by the input matrix, where rows are the variables and columns are the observations. |
|
Returns the cross product of vectors in dimension dim of input and other. |
|
Returns a namedtuple (values, indices) where values is the cumulative maximum of elements of input in the dimension dim. |
|
Returns a namedtuple (values, indices) where values is the cumulative minimum of elements of input in the dimension dim. |
|
Returns the cumulative product of elements of input in the dimension dim. |
|
Returns the cumulative sum of elements of input in the dimension dim. |
|
Creates a tensor whose diagonals of certain 2D planes (specified by dim1 and dim2) are filled by input. |
|
Returns a partial view of input with its diagonal elements with respect to dim1 and dim2 appended as a dimension at the end of the shape. |
|
| Computes the n-th forward difference along the given dimension. | |
Sums the product of the elements of the input operands along dimensions specified using a notation based on the Einstein summation convention. |
|
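Two short einsum sketches of the Einstein-summation notation described above (the operand shapes are illustrative):

```python
import torch

a = torch.arange(6.0).reshape(2, 3)
b = torch.arange(12.0).reshape(3, 4)

mm = torch.einsum("ij,jk->ik", a, b)     # matrix product, same as a @ b
tr = torch.einsum("ii->", torch.eye(3))  # trace of the identity matrix
```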
Flattens input by reshaping it into a one-dimensional tensor. |
|
| Reverse the order of an n-D tensor along given axis in dims. | |
| Flip tensor in the left/right direction, returning a new tensor. | |
| Flip tensor in the up/down direction, returning a new tensor. | |
Computes the Kronecker product, denoted by ⊗, of input and other. |
|
| Rotate an n-D tensor by 90 degrees in the plane specified by dims axis. | |
Computes the element-wise greatest common divisor (GCD) of input and other. |
|
| Computes the histogram of a tensor. | |
| Computes a histogram of the values in a tensor. | |
| Computes a multi-dimensional histogram of the values in a tensor. | |
| Creates grids of coordinates specified by the 1D inputs in tensors. | |
Computes the element-wise least common multiple (LCM) of input and other. |
|
Returns the logarithm of the cumulative summation of the exponentiation of elements of input in the dimension dim. |
|
| Return a contiguous flattened tensor. | |
Returns a tensor where each sub-tensor of input along dimension dim is normalized such that the p-norm of the sub-tensor is lower than the value maxnorm |
|
| Repeat elements of a tensor. | |
Roll the tensor input along the given dimension(s). |
|
Find the indices from the innermost dimension of sorted_sequence such that, if the corresponding values in values were inserted before the indices, when sorted, the order of the corresponding innermost dimension within sorted_sequence would be preserved. |
|
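A sketch of torch.searchsorted (the sequence and values are illustrative); the innermost dimension of sorted_sequence must already be sorted:

```python
import torch

sorted_seq = torch.tensor([1, 3, 5, 7, 9])
vals = torch.tensor([3, 6, 9])

left = torch.searchsorted(sorted_seq, vals)               # leftmost insertion points
right = torch.searchsorted(sorted_seq, vals, right=True)  # rightmost insertion points
```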
| Returns a contraction of a and b over multiple dimensions. | |
| Returns the sum of the elements of the diagonal of the input 2-D matrix. | |
Returns the lower triangular part of the matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. |
|
Returns the indices of the lower triangular part of a row-by-col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. |
|
Returns the upper triangular part of a matrix (2-D tensor) or batch of matrices input, the other elements of the result tensor out are set to 0. |
|
Returns the indices of the upper triangular part of a row by col matrix in a 2-by-N Tensor, where the first row contains row coordinates of all indices and the second row contains column coordinates. |
|
| Expands a dimension of the input tensor over multiple dimensions. | |
| Generates a Vandermonde matrix. | |
Returns a view of input as a real tensor. |
|
Returns a view of input as a complex tensor. |
|
Returns a new tensor with materialized conjugation if input's conjugate bit is set to True, else returns input. |
|
Returns a new tensor with materialized negation if input's negative bit is set to True, else returns input. |
Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). |
|
Performs a matrix multiplication of the matrices mat1 and mat2. |
|
Performs a matrix-vector product of the matrix mat and the vector vec. |
|
Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix input. |
|
Performs a batch matrix-matrix product of matrices in batch1 and batch2. |
|
Performs a batch matrix-matrix product of matrices stored in input and mat2. |
|
| Returns the matrix product of the N 2-D tensors. | |
| Computes the Cholesky decomposition of a symmetric positive-definite matrix A or for batches of symmetric positive-definite matrices. | |
| Computes the inverse of a complex Hermitian or real symmetric positive-definite matrix given its Cholesky decomposition. | |
| Computes the solution of a system of linear equations with complex Hermitian or real symmetric positive-definite lhs given its Cholesky decomposition. | |
| Computes the dot product of two 1D tensors. | |
| This is a low-level function for calling LAPACK’s geqrf directly. | |
Alias of torch.outer(). |
|
| Computes the dot product for 1D tensors. | |
Alias for torch.linalg.inv(). |
|
Alias for torch.linalg.det(). |
|
| Calculates the log determinant of a square matrix or batches of square matrices. | |
Alias for torch.linalg.slogdet(). |
|
Computes the LU factorization of a matrix or batches of matrices A. |
|
Returns the LU solve of the linear system Ax=b using the partially pivoted LU factorization of A from lu_factor(). |
|
Unpacks the LU decomposition returned by lu_factor() into the P, L, U matrices. |
|
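The LU entries above compose naturally: factor once, then reuse the factorization for solves and for unpacking. A minimal sketch on a 2x2 system:

```python
import torch

A = torch.tensor([[3., 1.],
                  [1., 2.]])
b = torch.tensor([[9.],
                  [8.]])
LU, pivots = torch.linalg.lu_factor(A)
x = torch.linalg.lu_solve(LU, pivots, b)   # solves A x = b
P, L, U = torch.lu_unpack(LU, pivots)      # A == P @ L @ U
```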
| Matrix product of two tensors. | |
Alias for torch.linalg.matrix_power(). |
|
Alias for torch.linalg.matrix_exp(). |
|
Performs a matrix multiplication of the matrices input and mat2. |
|
Performs a matrix-vector product of the matrix input and the vector vec. |
|
Alias for torch.linalg.householder_product(). |
|
| Computes the matrix-matrix multiplication of a product of Householder matrices with a general matrix. | |
Outer product of input and vec2. |
|
Alias for torch.linalg.pinv(). |
|
Computes the QR decomposition of a matrix or a batch of matrices input, and returns a namedtuple (Q, R) of tensors such that input=QR with Q being an orthogonal matrix or batch of orthogonal matrices and R being an upper triangular matrix or batch of upper triangular matrices. |
|
Computes the singular value decomposition of either a matrix or batch of matrices input. |
|
Return the singular value decomposition (U, S, V) of a matrix, batches of matrices, or a sparse matrix A such that A ≈ U diag(S) Vᴴ. |
|
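A small sketch of the SVD reconstruction identity above, using the torch.linalg.svd variant in reduced form:

```python
import torch

A = torch.randn(4, 3)
# Reduced SVD: U is (4, 3), S is (3,), Vh is (3, 3)
U, S, Vh = torch.linalg.svd(A, full_matrices=False)
recon = U @ torch.diag(S) @ Vh   # reconstructs A up to rounding error
```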
| Performs linear Principal Component Analysis (PCA) on a low-rank matrix, batches of such matrices, or sparse matrix. | |
| Find the k largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem using matrix-free LOBPCG methods. | |
Alias for torch.trapezoid(). |
|
Computes the trapezoidal rule along dim. |
|
Cumulatively computes the trapezoidal rule along dim. |
|
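The difference between the two trapezoid functions is scalar result vs. running partial integrals. A minimal sketch with unit spacing:

```python
import torch

y = torch.tensor([1., 2., 3.])           # f sampled at unit spacing
area = torch.trapezoid(y)                # 0.5*(1+2) + 0.5*(2+3) = 4.0
partial = torch.cumulative_trapezoid(y)  # running integrals: [1.5, 4.0]
```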
| Solves a system of equations with a square upper or lower triangular invertible matrix A and multiple right-hand sides b. | |
| Computes the dot product of two 1D vectors along a dimension. |
Warning
This API is in beta and subject to future changes. Forward-mode AD is not supported.
Apply torch.abs() to each Tensor of the input list. |
|
Apply torch.abs() to each Tensor of the input list. |
|
Apply torch.acos() to each Tensor of the input list. |
|
Apply torch.acos() to each Tensor of the input list. |
|
Apply torch.asin() to each Tensor of the input list. |
|
Apply torch.asin() to each Tensor of the input list. |
|
Apply torch.atan() to each Tensor of the input list. |
|
Apply torch.atan() to each Tensor of the input list. |
|
Apply torch.ceil() to each Tensor of the input list. |
|
Apply torch.ceil() to each Tensor of the input list. |
|
Apply torch.cos() to each Tensor of the input list. |
|
Apply torch.cos() to each Tensor of the input list. |
|
Apply torch.cosh() to each Tensor of the input list. |
|
Apply torch.cosh() to each Tensor of the input list. |
|
Apply torch.erf() to each Tensor of the input list. |
|
Apply torch.erf() to each Tensor of the input list. |
|
Apply torch.erfc() to each Tensor of the input list. |
|
Apply torch.erfc() to each Tensor of the input list. |
|
Apply torch.exp() to each Tensor of the input list. |
|
Apply torch.exp() to each Tensor of the input list. |
|
Apply torch.expm1() to each Tensor of the input list. |
|
Apply torch.expm1() to each Tensor of the input list. |
|
Apply torch.floor() to each Tensor of the input list. |
|
Apply torch.floor() to each Tensor of the input list. |
|
Apply torch.log() to each Tensor of the input list. |
|
Apply torch.log() to each Tensor of the input list. |
|
Apply torch.log10() to each Tensor of the input list. |
|
Apply torch.log10() to each Tensor of the input list. |
|
Apply torch.log1p() to each Tensor of the input list. |
|
Apply torch.log1p() to each Tensor of the input list. |
|
Apply torch.log2() to each Tensor of the input list. |
|
Apply torch.log2() to each Tensor of the input list. |
|
Apply torch.neg() to each Tensor of the input list. |
|
Apply torch.neg() to each Tensor of the input list. |
|
Apply torch.tan() to each Tensor of the input list. |
|
Apply torch.tan() to each Tensor of the input list. |
|
Apply torch.sin() to each Tensor of the input list. |
|
Apply torch.sin() to each Tensor of the input list. |
|
Apply torch.sinh() to each Tensor of the input list. |
|
Apply torch.sinh() to each Tensor of the input list. |
|
Apply torch.round() to each Tensor of the input list. |
|
Apply torch.round() to each Tensor of the input list. |
|
Apply torch.sqrt() to each Tensor of the input list. |
|
Apply torch.sqrt() to each Tensor of the input list. |
|
Apply torch.lgamma() to each Tensor of the input list. |
|
Apply torch.lgamma() to each Tensor of the input list. |
|
Apply torch.frac() to each Tensor of the input list. |
|
Apply torch.frac() to each Tensor of the input list. |
|
Apply torch.reciprocal() to each Tensor of the input list. |
|
Apply torch.reciprocal() to each Tensor of the input list. |
|
Apply torch.sigmoid() to each Tensor of the input list. |
|
Apply torch.sigmoid() to each Tensor of the input list. |
|
Apply torch.trunc() to each Tensor of the input list. |
|
Apply torch.trunc() to each Tensor of the input list. |
|
Apply torch.zero() to each Tensor of the input list. |
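Each operation in the list above comes in an out-of-place form (returns new tensors) and an in-place form with a trailing underscore (mutates each tensor). A minimal sketch using the sqrt pair:

```python
import torch

params = [torch.tensor([1., 4.]), torch.tensor([9.])]
roots = torch._foreach_sqrt(params)   # out-of-place: returns new tensors
torch._foreach_sqrt_(params)          # in-place: mutates each tensor
```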
| Returns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1. | |
Returns the torch.dtype that would result from performing an arithmetic operation on the provided input tensors. |
|
| Determines if a type conversion is allowed under PyTorch casting rules described in the type promotion documentation. | |
Returns the torch.dtype with the smallest size and scalar kind that is not smaller nor of lower kind than either type1 or type2. |
|
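A quick sketch of the type-promotion queries above (the `result_type` value assumes the default floating-point dtype is float32):

```python
import torch

# result_type: dtype an arithmetic op on these operands would produce
dt = torch.result_type(torch.tensor([1, 2]), 1.0)

# promote_types: smallest dtype not smaller nor of lower kind than either
p = torch.promote_types(torch.int32, torch.float32)

# can_cast: legality under the type-promotion casting rules
ok = torch.can_cast(torch.double, torch.float)
no = torch.can_cast(torch.float, torch.int)
```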
| Sets whether PyTorch operations must use “deterministic” algorithms. | |
| Returns True if the global deterministic flag is turned on. | |
| Returns True if the global deterministic flag is set to warn only. | |
| Sets the debug mode for deterministic operations. | |
| Returns the current value of the debug mode for deterministic operations. | |
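The determinism flags above are queried and set globally. A minimal sketch that toggles them and restores the default:

```python
import torch

# warn_only=True warns (rather than errors) on nondeterministic ops
torch.use_deterministic_algorithms(True, warn_only=True)
enabled = torch.are_deterministic_algorithms_enabled()
warn_only = torch.is_deterministic_algorithms_warn_only_enabled()
torch.use_deterministic_algorithms(False)   # restore the default
```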
| Sets the internal precision of float32 matrix multiplications. | |
| Returns the current value of float32 matrix multiplication precision. | |
| When this flag is False (default) then some PyTorch warnings may only appear once per process. | |
| Returns the module associated with a given device (e.g., torch.device('cuda'), "mtia:0", "xpu", …). | |
| Returns True if the global warn_always flag is turned on. | |
vmap is the vectorizing map; vmap(func) returns a new function that maps func over some dimension of the inputs. |
|
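A minimal sketch of vmap: a function written for single vectors is mapped over a leading batch dimension without an explicit loop.

```python
import torch

def dot(a, b):                  # written for single 1-D vectors
    return (a * b).sum()

batched_dot = torch.vmap(dot)   # maps over the leading batch dimension
x = torch.randn(8, 5)
y = torch.randn(8, 5)
out = batched_dot(x, y)         # shape (8,)
```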
| A wrapper around Python’s assert which is symbolically traceable. |
Unlike regular bools, regular boolean operators force extra guards instead of evaluating symbolically. Use the bitwise operators instead to handle this.
| SymInt-aware utility for float casting. | |
| SymInt-aware utility for int casting. | |
| SymInt-aware utility for max which avoids branching on a < b. | |
| SymInt-aware utility for min(). | |
| SymInt-aware utility for logical negation. | |
SymInt-aware utility for the ternary operator (t if b else f). |
|
| N-ary add which is faster to compute for long lists than iterated binary addition. |
Warning
This feature is a prototype and may have compatibility breaking changes in the future.
| Conditionally applies true_fn or false_fn. |
| Optimizes given model/function using TorchDynamo and specified backend. |
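A minimal sketch of torch.compile; the "eager" backend shown here skips code generation and is handy for smoke tests, while the default backend is "inductor":

```python
import torch

def fn(x):
    return torch.sin(x) + torch.cos(x)

# Compile with the lightweight "eager" backend; results match eager mode
compiled = torch.compile(fn, backend="eager")
x = torch.randn(4)
out = compiled(x)
```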