tensor.nlinalg – Linear Algebra Ops Using Numpy#
Note
This module is not imported by default. You need to import it to use it.
API#
- pytensor.tensor.nlinalg.kron(a, b)[source]#
Kronecker product.
Same as np.kron(a, b)
- Parameters:
a (array_like) –
b (array_like) –
- Return type:
array_like with a.ndim + b.ndim - 2 dimensions
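A minimal usage sketch (not part of the upstream docstring; names and values are illustrative):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg  # not imported by default

a = pt.matrix("a")
b = pt.matrix("b")
k = nlinalg.kron(a, b)

# Evaluate the symbolic graph on concrete values; matches np.kron(a_val, b_val)
a_val = np.array([[1.0, 2.0], [3.0, 4.0]])
b_val = np.eye(2)
print(k.eval({a: a_val, b: b_val}))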
- pytensor.tensor.nlinalg.matrix_dot(*args)[source]#
Shorthand for a chain of dot products.
Given matrices \(A_0, A_1, \ldots, A_N\), matrix_dot will generate the matrix product of all of them in the given order, namely \(A_0 \cdot A_1 \cdot A_2 \cdots A_N\).
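A minimal usage sketch (illustrative, not from the upstream docstring):

import numpy as np
import pytensor
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

A = pt.matrix("A")
B = pt.matrix("B")
C = pt.matrix("C")

out = nlinalg.matrix_dot(A, B, C)  # equivalent to A.dot(B).dot(C)
f = pytensor.function([A, B, C], out)

rng = np.random.default_rng(0)
x, y, z = (rng.normal(size=(3, 3)) for _ in range(3))
assert np.allclose(f(x, y, z), x @ y @ z)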
- pytensor.tensor.nlinalg.matrix_power(M, n)[source]#
Raise a square matrix, M, to the (integer) power n.
This implementation uses exponentiation by squaring, which is significantly faster than the naive implementation. The time complexity for exponentiation by squaring is \(\mathcal{O}((n \log M)^k)\).
- Parameters:
M (TensorVariable) –
n (int) –
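A minimal usage sketch (illustrative values; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

M = pt.matrix("M")
M5 = nlinalg.matrix_power(M, 5)  # n must be a plain Python integer

m_val = np.array([[1.0, 1.0], [0.0, 1.0]])
print(M5.eval({M: m_val}))  # same result as np.linalg.matrix_power(m_val, 5)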
- pytensor.tensor.nlinalg.norm(x: TensorVariable, ord: Optional[Union[float, Literal['fro', 'f', 'nuc', 'inf', '-inf', 0, 1, -1, 2, -2]]] = None, axis: Optional[Union[int, tuple[int, ...]]] = None, keepdims: bool = False)[source]#
Matrix or vector norm.
- Parameters:
x (TensorVariable) – Tensor to take norm of.
ord (float, str or int, optional) –
Order of norm. If ord is a str, it must be one of the following:
- 'fro' or 'f' : Frobenius norm
- 'nuc' : nuclear norm
- 'inf' : Infinity norm
- '-inf' : Negative infinity norm
If an integer, order can be one of -2, -1, 0, 1, or 2. Otherwise ord must be a float. Default is the Frobenius (L2) norm.
axis (tuple of int, optional) – Axes over which to compute the norm. If None, norm of entire matrix (or vector) is computed. Row or column norms can be computed by passing a single integer; this will treat a matrix like a batch of vectors.
keepdims (bool) – If True, dummy axes will be inserted into the output so that norm.ndim == x.ndim. Default is False.
- Returns:
Norm of x along the axes specified by axis.
- Return type:
TensorVariable
Notes
Batched dimensions are supported to the left of the core dimensions. For example, if x is a 3D tensor with shape (2, 3, 4), then norm(x) will compute the norm of each 3x4 matrix in the batch.
If the input is a 2D tensor and should be treated as a batch of vectors, the axis argument must be specified.
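A minimal usage sketch (illustrative shapes; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

x = pt.matrix("x")

frob = nlinalg.norm(x)               # default: Frobenius norm of the matrix
row_norms = nlinalg.norm(x, axis=1)  # treat each row as a vector

x_val = np.arange(6.0).reshape(3, 2)
print(frob.eval({x: x_val}))
print(row_norms.eval({x: x_val}))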
- pytensor.tensor.nlinalg.pinv(x, hermitian=False)[source]#
Computes the pseudo-inverse of a matrix \(A\).
The pseudo-inverse of a matrix \(A\), denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\).
Note that \(Ax=AA^+b\), so \(AA^+\) is close to the identity matrix. This method is not faster than matrix_inverse. Its strength comes from the fact that it works for non-square matrices. If you have a square matrix, though, matrix_inverse can be both more exact and faster to compute. Also, this op does not get optimized into a solve op.
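A minimal sketch of the least-squares use case described above (illustrative shapes; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

A = pt.matrix("A")
b = pt.vector("b")

# Least-squares solution x_bar = A^+ b for a non-square A
x_bar = nlinalg.pinv(A).dot(b)

A_val = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3x2, not square
b_val = np.array([1.0, 2.0, 3.0])
print(x_bar.eval({A: A_val, b: b_val}))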
- pytensor.tensor.nlinalg.qr(a, mode='reduced')[source]#
Computes the QR decomposition of a matrix. Factor the matrix a as qr, where q is orthonormal and r is upper-triangular.
- Parameters:
a (array_like, shape (M, N)) – Matrix to be factored.
mode ({'reduced', 'complete', 'r', 'raw'}, optional) –
If K = min(M, N), then
- 'reduced' : returns q, r with dimensions (M, K), (K, N)
- 'complete' : returns q, r with dimensions (M, M), (M, N)
- 'r' : returns r only, with dimensions (K, N)
- 'raw' : returns h, tau with dimensions (N, M), (K,)
Note that the array h returned in 'raw' mode is transposed for calling Fortran.
Default mode is 'reduced'.
- Returns:
q (matrix of float or complex, optional) – A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case.
r (matrix of float or complex, optional) – The upper-triangular matrix.
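A minimal usage sketch (illustrative shapes; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

A = pt.matrix("A")
q, r = nlinalg.qr(A, mode="reduced")  # 'reduced' and 'complete' return two outputs
r_only = nlinalg.qr(A, mode="r")      # 'r' returns a single output

A_val = np.random.default_rng(0).normal(size=(4, 3))
q_val, r_val = q.eval({A: A_val}), r.eval({A: A_val})
assert np.allclose(q_val @ r_val, A_val)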
- pytensor.tensor.nlinalg.slogdet(x: Union[Variable, Sequence[Variable], ArrayLike]) tuple[pytensor.tensor.variable.TensorVariable, pytensor.tensor.variable.TensorVariable] [source]#
Compute the sign and (natural) logarithm of the determinant of an array.
Returns a naive graph which is optimized later using rewrites with the det operation.
- Parameters:
x ((..., M, M) tensor or tensor_like) – Input tensor, has to be square.
- Returns:
A tuple with the following attributes
sign ((…) tensor_like) – A number representing the sign of the determinant. For a real matrix, this is 1, 0, or -1.
logabsdet ((…) tensor_like) – The natural log of the absolute value of the determinant.
If the determinant is zero, then sign will be 0 and logabsdet will be -inf. In all cases, the determinant is equal to sign * exp(logabsdet).
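A minimal usage sketch (illustrative values; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

X = pt.matrix("X")
sign, logabsdet = nlinalg.slogdet(X)

x_val = np.array([[2.0, 0.0], [0.0, 3.0]])
s = sign.eval({X: x_val})
la = logabsdet.eval({X: x_val})
print(s * np.exp(la))  # recovers det(X) = 6.0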
- pytensor.tensor.nlinalg.svd(a, full_matrices: bool = True, compute_uv: bool = True)[source]#
Computes the singular value decomposition (SVD) of the matrix a. This function performs the SVD on CPU.
- Parameters:
full_matrices (bool, optional) – If True (default), u and v have the shapes (M, M) and (N, N), respectively. Otherwise, the shapes are (M, K) and (K, N), respectively, where K = min(M, N).
compute_uv (bool, optional) – Whether or not to compute u and v in addition to s. True by default.
- Returns:
U, V, D
- Return type:
matrices
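A minimal usage sketch (illustrative; not from the upstream docstring). It assumes the three outputs follow numpy's svd convention, i.e. the third output is the transposed right factor, so U @ diag(s) @ V reconstructs a:

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

A = pt.matrix("A")
u, s, v = nlinalg.svd(A, full_matrices=False)  # three outputs when compute_uv=True
s_only = nlinalg.svd(A, compute_uv=False)      # singular values only

A_val = np.random.default_rng(1).normal(size=(4, 3))
u_val, s_val, v_val = (t.eval({A: A_val}) for t in (u, s, v))
# Reconstruction check (assumes numpy's output convention)
assert np.allclose(u_val @ np.diag(s_val) @ v_val, A_val)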
- pytensor.tensor.nlinalg.tensorinv(a, ind=2)[source]#
PyTensor utilization of numpy.linalg.tensorinv.
Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i.e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation.
- Parameters:
a (array_like) – Tensor to ‘invert’. Its shape must be ‘square’, i.e., prod(a.shape[:ind]) == prod(a.shape[ind:]).
ind (int, optional) – Number of first indices that are involved in the inverse sum. Must be a positive integer; default is 2.
- Returns:
b – a’s tensordot inverse, with shape a.shape[ind:] + a.shape[:ind].
- Return type:
ndarray
- Raises:
LinAlgError – If a is singular or not ‘square’ (in the above sense).
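A minimal usage sketch (illustrative shapes; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

a = pt.tensor4("a")  # e.g. shape (2, 3, 3, 2): prod((2, 3)) == prod((3, 2)), so 'square'
a_inv = nlinalg.tensorinv(a, ind=2)

rng = np.random.default_rng(2)
a_val = rng.normal(size=(2, 3, 3, 2))
inv_val = a_inv.eval({a: a_val})

# tensordot(tensorinv(a), a, 2) is (approximately) the identity tensor for tensordot
ident = np.tensordot(inv_val, a_val, axes=2).reshape(6, 6)
assert np.allclose(ident, np.eye(6))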
- pytensor.tensor.nlinalg.tensorsolve(a, b, axes=None)[source]#
PyTensor utilization of numpy.linalg.tensorsolve.
Solve the tensor equation a x = b for x. It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=len(b.shape)).
- Parameters:
a (array_like) – Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’).
b (array_like) – Right-hand tensor, which can be of any shape.
axes (tuple of ints, optional) – Axes in a to reorder to the right, before inversion. If None (default), no reordering is done.
- Returns:
x
- Return type:
ndarray, shape Q
- Raises:
LinAlgError – If a is singular or not ‘square’ (in the above sense).
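A minimal usage sketch (illustrative shapes; not from the upstream docstring):

import numpy as np
import pytensor.tensor as pt
from pytensor.tensor import nlinalg

a = pt.tensor4("a")  # e.g. shape (2, 3, 2, 3) == b.shape + Q with Q = (2, 3)
b = pt.matrix("b")   # shape (2, 3)

x = nlinalg.tensorsolve(a, b)  # solution has shape Q = (2, 3)

rng = np.random.default_rng(3)
a_val = rng.normal(size=(2, 3, 2, 3))
b_val = rng.normal(size=(2, 3))
x_val = x.eval({a: a_val, b: b_val})
assert np.allclose(np.tensordot(a_val, x_val, axes=2), b_val)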