tensor.nlinalg – Linear Algebra Ops Using Numpy#


This module is not imported by default. You need to import it to use it.


pytensor.tensor.nlinalg.kron(a, b)[source]#

Kronecker product.

Same as np.kron(a, b)

Parameters:

  • a (array_like) –

  • b (array_like) –

Return type:

array_like with a.ndim + b.ndim - 2 dimensions
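Since the Op is documented as the same as np.kron, a quick NumPy example illustrates the block structure of the result: each entry of a scales a full copy of b.

```python
import numpy as np

# np.kron illustrates the semantics this Op mirrors.
a = np.array([[1, 2],
              [3, 4]])
b = np.eye(2)

# Each a[i, j] scales a copy of b, so the result is a (4, 4) block matrix.
k = np.kron(a, b)
```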


pytensor.tensor.nlinalg.matrix_dot(*args)[source]#

Shorthand for product between several dots.

Given \(N\) matrices \(A_0, A_1, \ldots, A_N\), matrix_dot will generate the matrix product of all of them in the given order, namely \(A_0 \cdot A_1 \cdot A_2 \cdot \ldots \cdot A_N\).
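As a sketch of this behaviour in NumPy, the chained product is a fold of np.dot over the arguments (matrix_dot_np below is an illustrative stand-in, not the Op itself):

```python
import numpy as np
from functools import reduce

# Fold np.dot over all arguments, left to right, mimicking matrix_dot.
def matrix_dot_np(*matrices):
    return reduce(np.dot, matrices)

A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 2.0]])

out = matrix_dot_np(A, B, C)  # same as A @ B @ C
```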

pytensor.tensor.nlinalg.matrix_power(M, n)[source]#

Raise a square matrix, M, to the (integer) power n.

This implementation uses exponentiation by squaring, which is significantly faster than the naive implementation. The time complexity for exponentiation by squaring is \(\mathcal{O}((n \log M)^k)\).
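A minimal NumPy sketch of exponentiation by squaring (matrix_power_sq is an illustrative helper, not the Op's actual implementation); it performs \(\mathcal{O}(\log n)\) matrix multiplications instead of \(n - 1\):

```python
import numpy as np

# Exponentiation by squaring for a non-negative integer exponent n.
def matrix_power_sq(M, n):
    result = np.eye(M.shape[0], dtype=M.dtype)
    base = M.copy()
    while n > 0:
        if n % 2 == 1:          # odd exponent: fold the current square in
            result = result @ base
        base = base @ base      # square the base
        n //= 2
    return result

M = np.array([[1.0, 1.0], [0.0, 1.0]])
```

The result agrees with np.linalg.matrix_power, which uses the same strategy.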

pytensor.tensor.nlinalg.norm(x: TensorVariable, ord: Optional[Union[float, Literal['fro', 'f', 'nuc', 'inf', '-inf', 0, 1, -1, 2, -2]]] = None, axis: Optional[Union[int, tuple[int, ...]]] = None, keepdims: bool = False)[source]#

Matrix or vector norm.

Parameters:

  • x (TensorVariable) – Tensor to take norm of.

  • ord (float, str or int, optional) –

    Order of norm. If ord is a str, it must be one of the following:
    • ’fro’ or ‘f’ : Frobenius norm

    • ’nuc’ : nuclear norm

    • ’inf’ : Infinity norm

    • ’-inf’ : Negative infinity norm

    If an integer, order can be one of -2, -1, 0, 1, or 2. Otherwise ord must be a float.

    Default is the Frobenius (L2) norm.

  • axis (tuple of int, optional) – Axes over which to compute the norm. If None, norm of entire matrix (or vector) is computed. Row or column norms can be computed by passing a single integer; this will treat a matrix like a batch of vectors.

  • keepdims (bool) – If True, dummy axes will be inserted into the output so that norm.ndim == x.ndim. Default is False.


Returns:

Norm of x along axes specified by axis.

Return type:

TensorVariable

Batched dimensions are supported to the left of the core dimensions. For example, if x is a 3D tensor with shape (2, 3, 4), then norm(x) will compute the norm of each 3x4 matrix in the batch.

If the input is a 2D tensor and should be treated as a batch of vectors, the axis argument must be specified.
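The batched behaviour can be illustrated with NumPy, where the core (matrix) axes must be passed explicitly rather than inferred:

```python
import numpy as np

# A batch of two 3x4 matrices.
x = np.arange(24, dtype=float).reshape(2, 3, 4)

# One Frobenius norm per matrix in the batch, over the trailing axes.
frob = np.linalg.norm(x, axis=(-2, -1))

# Treating a 2D array as a batch of row vectors requires an explicit axis.
v = np.arange(6, dtype=float).reshape(2, 3)
row_norms = np.linalg.norm(v, axis=1)
```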

pytensor.tensor.nlinalg.pinv(x, hermitian=False)[source]#

Computes the pseudo-inverse of a matrix \(A\).

The pseudo-inverse of a matrix \(A\), denoted \(A^+\), is defined as: “the matrix that ‘solves’ [the least-squares problem] \(Ax = b\),” i.e., if \(\bar{x}\) is said solution, then \(A^+\) is that matrix such that \(\bar{x} = A^+b\).

Note that \(Ax = AA^+b\), so \(AA^+\) is close to the identity matrix. This method is not faster than matrix_inverse. Its strength lies in working for non-square matrices. For square matrices, though, matrix_inverse can be both more exact and faster to compute. Also, this Op does not get optimized into a solve Op.
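The least-squares property described above can be checked with np.linalg.pinv, whose semantics this Op mirrors: for a tall non-square A, pinv(A) @ b matches the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # non-square: 5 equations, 3 unknowns
b = rng.standard_normal(5)

# x = A+ b minimizes ||A x - b||, matching the least-squares solver.
x_pinv = np.linalg.pinv(A) @ b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```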

pytensor.tensor.nlinalg.qr(a, mode='reduced')[source]#

Computes the QR decomposition of a matrix. Factor the matrix a as qr, where q is orthonormal and r is upper-triangular.

Parameters:

  • a (array_like, shape (M, N)) – Matrix to be factored.

  • mode ({'reduced', 'complete', 'r', 'raw'}, optional) –

    If K = min(M, N), then

    • ‘reduced’ : returns q, r with dimensions (M, K), (K, N)

    • ‘complete’ : returns q, r with dimensions (M, M), (M, N)

    • ‘r’ : returns r only with dimensions (K, N)

    • ‘raw’ : returns h, tau with dimensions (N, M), (K,)

    Note that array h returned in ‘raw’ mode is transposed for calling Fortran.

    Default mode is ‘reduced’.


Returns:

  • q (matrix of float or complex, optional) – A matrix with orthonormal columns. When mode = ‘complete’ the result is an orthogonal/unitary matrix depending on whether or not a is real/complex. The determinant may be either +/- 1 in that case.

  • r (matrix of float or complex, optional) – The upper-triangular matrix.
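The mode-dependent shapes can be checked against np.linalg.qr, whose conventions this Op follows, here for a tall 4x3 matrix (so M=4, N=3, K=3):

```python
import numpy as np

a = np.random.default_rng(1).standard_normal((4, 3))

# 'reduced' (the default): q is (M, K), r is (K, N).
q, r = np.linalg.qr(a, mode='reduced')

# 'complete': q is (M, M), r is (M, N).
q_full, r_full = np.linalg.qr(a, mode='complete')
```

In both modes the factors reproduce the input: q @ r equals a up to rounding.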

pytensor.tensor.nlinalg.svd(a, full_matrices: bool = True, compute_uv: bool = True)[source]#

This function performs the SVD on CPU.

Parameters:

  • a (array_like, shape (M, N)) – The matrix to be decomposed.

  • full_matrices (bool, optional) – If True (default), u and v have the shapes (M, M) and (N, N), respectively. Otherwise, the shapes are (M, K) and (K, N), respectively, where K = min(M, N).

  • compute_uv (bool, optional) – Whether or not to compute u and v in addition to s. True by default.


Returns:

U, V, D

Return type:

matrices

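The factor shapes under both settings of full_matrices can be illustrated with np.linalg.svd, again for a 4x3 matrix (M=4, N=3, K=3):

```python
import numpy as np

a = np.random.default_rng(2).standard_normal((4, 3))

# full_matrices=True (default): u is (M, M), vt is (N, N), s holds K values.
u, s, vt = np.linalg.svd(a, full_matrices=True)

# full_matrices=False: u is (M, K), vt is (K, N).
u_r, s_r, vt_r = np.linalg.svd(a, full_matrices=False)
```

The reduced factors are enough to rebuild a: u_r @ diag(s_r) @ vt_r equals a up to rounding.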
pytensor.tensor.nlinalg.tensorinv(a, ind=2)[source]#

PyTensor utilization of numpy.linalg.tensorinv.

Compute the ‘inverse’ of an N-dimensional array. The result is an inverse for a relative to the tensordot operation tensordot(a, b, ind), i.e., up to floating-point accuracy, tensordot(tensorinv(a), a, ind) is the “identity” tensor for the tensordot operation.

Parameters:

  • a (array_like) – Tensor to ‘invert’. Its shape must be ‘square’, i.e., prod(a.shape[:ind]) == prod(a.shape[ind:]).

  • ind (int, optional) – Number of first indices that are involved in the inverse sum. Must be a positive integer, default is 2.


Returns:

b – a’s tensordot inverse, shape a.shape[ind:] + a.shape[:ind].

Return type:

ndarray

Raises:

LinAlgError – If a is singular or not ‘square’ (in the above sense).
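The shape rules above can be demonstrated with numpy.linalg.tensorinv, which this Op wraps; with ind=2, prod(a.shape[:2]) must equal prod(a.shape[2:]) (here 4*6 == 8*3 == 24):

```python
import numpy as np

# A 'square' 4D tensor: eye(24) reshaped so both index groups have size 24.
a = np.eye(4 * 6).reshape(4, 6, 8, 3)
ainv = np.linalg.tensorinv(a, ind=2)

# The inverse has shape a.shape[ind:] + a.shape[:ind] = (8, 3, 4, 6), and
# tensordot(ainv, a, 2) is the identity for the tensordot operation.
ident = np.tensordot(ainv, a, 2)
```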

pytensor.tensor.nlinalg.tensorsolve(a, b, axes=None)[source]#

PyTensor utilization of numpy.linalg.tensorsolve.

Solve the tensor equation a x = b for x. It is assumed that all indices of x are summed over in the product, together with the rightmost indices of a, as is done in, for example, tensordot(a, x, axes=len(b.shape)).

Parameters:

  • a (array_like) – Coefficient tensor, of shape b.shape + Q. Q, a tuple, equals the shape of that sub-tensor of a consisting of the appropriate number of its rightmost indices, and must be such that prod(Q) == prod(b.shape) (in which sense a is said to be ‘square’).

  • b (array_like) – Right-hand tensor, which can be of any shape.

  • axes (tuple of ints, optional) – Axes in a to reorder to the right, before inversion. If None (default), no reordering is done.



Returns:

x

Return type:

ndarray, shape Q


Raises:

LinAlgError – If a is singular or not ‘square’ (in the above sense).


pytensor.tensor.nlinalg.trace(X)[source]#

Returns the sum of diagonal elements of matrix X.
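The NumPy equivalent is np.trace:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Sum of the diagonal entries: 1 + 4.
t = np.trace(X)
```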