The sparse submodule is not loaded when we import PyTensor: you must
import pytensor.sparse explicitly to enable it.
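A minimal sketch of enabling the module and compiling a function on a
sparse input (the example matrix is our own)::

    import numpy as np
    import scipy.sparse as sp

    import pytensor
    import pytensor.sparse as sparse  # the submodule must be imported explicitly

    # Declare a symbolic CSC matrix and compile a function that adds it to itself.
    x = sparse.csc_matrix(name="x", dtype="float64")
    f = pytensor.function([x], sparse.add(x, x))

    a = sp.csc_matrix(
        np.asarray([[0, 1, 1], [0, 0, 0], [1, 0, 0]], dtype="float64")
    )
    print(f(a).toarray())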
The sparse module provides the same functionality as the tensor
module; the difference lies under the covers, because sparse matrices
do not store their data in a contiguous array. The sparse module has
been used, for example, in NLP for dense linear transformations of
sparse vectors.
This section explains how information is stored in the two SciPy
sparse formats supported by PyTensor.
PyTensor supports two compressed sparse formats, csc and csr,
based respectively on columns and rows. Both have the same
attributes: data, indices, indptr and shape.
The data attribute is a one-dimensional ndarray that
contains all the non-zero elements of the sparse matrix.
The indices and indptr attributes are used to store the
position of the data in the sparse matrix.
The shape attribute is exactly the same as the shape
attribute of a dense (i.e. generic) matrix. It can be explicitly
specified at the creation of a sparse matrix if it cannot be
inferred from the first three attributes.
In the Compressed Sparse Column format, indices holds the row
indexes within the column vectors of the matrix, and indptr tells
where each column starts in the data and indices
attributes. indptr can be thought of as giving the slice which
must be applied to the other two attributes in order to get each column of
the matrix. In other words, slice(indptr[i], indptr[i+1])
corresponds to the slice needed to find the i-th column of the matrix
in the data and indices fields.
The following example builds a matrix and returns its columns. It
prints the i-th column, i.e. a list of row indices in the column and their
corresponding values in the second list.
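A reconstruction of that example with SciPy; the matrix data is our own
choice, since the original listing was not preserved::

    import numpy as np
    import scipy.sparse as sp

    data = np.asarray([7, 8, 9])
    indices = np.asarray([0, 1, 2])
    indptr = np.asarray([0, 2, 3, 3, 3])
    m = sp.csc_matrix((data, indices, indptr), shape=(3, 4))
    print(m.toarray())

    # Each column i is the slice indptr[i]:indptr[i+1] of indices and data.
    for i in range(m.shape[1]):
        sl = slice(m.indptr[i], m.indptr[i + 1])
        print(i, m.indices[sl], m.data[sl])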
In the Compressed Sparse Row format, indices holds the column
indexes within the row vectors of the matrix, and indptr tells where
each row starts in the data and indices
attributes. indptr can be thought of as giving the slice which
must be applied to the other two attributes in order to get each row of the
matrix. In other words, slice(indptr[i], indptr[i+1]) corresponds
to the slice needed to find the i-th row of the matrix in the data
and indices fields.
The following example builds a matrix and returns its rows. It prints
the i-th row, i.e. a list of column indices in the row and their
corresponding values in the second list.
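The same data reinterpreted in CSR form, again with illustrative values
of our own::

    import numpy as np
    import scipy.sparse as sp

    data = np.asarray([7, 8, 9])
    indices = np.asarray([0, 1, 2])
    indptr = np.asarray([0, 2, 3, 3, 3])
    m = sp.csr_matrix((data, indices, indptr), shape=(4, 3))
    print(m.toarray())

    # Each row i is the slice indptr[i]:indptr[i+1] of indices and data.
    for i in range(m.shape[0]):
        sl = slice(m.indptr[i], m.indptr[i + 1])
        print(i, m.indices[sl], m.data[sl])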
The first input is sparse; the second can be sparse or dense.
The grad implemented is regular.
There is no C code for perform and no C code for grad.
Returns a sparse matrix.
The gradient returns a sparse matrix for sparse inputs and, by
default, a dense one for dense inputs. The parameter
grad_preserves_dense can be set to False to return a
sparse grad for dense inputs.
sampling_dot.
Both inputs must be dense.
The grad implemented is structured for p.
Both the dot product and the gradient are sampled.
There is C code for perform but not for grad.
Both perform and grad return a sparse matrix.
usmm.
You shouldn't insert this op yourself!
A rewrite transforms a dot into a Usmm when possible.
This Op is the sparse-dot equivalent of gemm.
There is no grad implemented for this Op.
One of the inputs must be sparse, the other sparse or dense.
perform returns a dense matrix.
Slice Operations
sparse_variable[N, N] returns a tensor scalar.
There is no grad implemented for this operation.
sparse_variable[M:N, O:P] returns a sparse matrix.
There is no grad implemented for this operation.
Sparse variables don't support [M, N:O] and [M:N, O], as we don't
support sparse vectors and returning a sparse matrix would break
the NumPy interface. Use [M:M+1, N:O] and [M:N, O:O+1] instead, as
in the sketch below.
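A minimal sketch of these rules, assuming the standard pytensor.sparse
variable constructors::

    import numpy as np
    import scipy.sparse as sp

    import pytensor
    import pytensor.sparse as sparse

    x = sparse.csc_matrix(name="x", dtype="float64")

    single = x[1, 1]     # tensor scalar (no grad implemented)
    block = x[1:3, 0:2]  # sparse matrix (no grad implemented)
    row = x[1:2, 0:3]    # use this instead of the unsupported x[1, 0:3]

    f = pytensor.function([x], [single, block, row])
    a = sp.csc_matrix(np.arange(9, dtype="float64").reshape(3, 3))
    s, b, r = f(a)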
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that
input computed based on the symbolic gradients with respect to each
output. If the output is not differentiable with respect to an input,
then this method should return an instance of type NullType for that
input.
Using the reverse-mode AD characterization given in [1]_, for a
\(C = f(A, B)\) representing the function implemented by the Op
and its two arguments \(A\) and \(B\), given by the
Variables in inputs, the values returned by Op.grad represent
the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and
\(\bar{B}\), for some scalar output term \(S_O\) of \(C\),
in

\[\operatorname{Tr}\left(\bar{C}^\top dC\right) =
\operatorname{Tr}\left(\bar{A}^\top dA\right) +
\operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Calculate the function on the inputs and put the variables in the output storage.
Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These
are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of
these lists). Each sub-list corresponds to the value of each
Variable in node.outputs. The primary purpose of this method
is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of
output_storage is not None, it has to be of the right type; for
instance, for a TensorVariable, it has to be a NumPy ndarray
with the right number of dimensions and the correct dtype.
Its shape and stride pattern can be arbitrary. It is not
guaranteed that such pre-set values were produced by a previous call to
this Op.perform(); they could have been allocated by another
Op's perform method.
An Op is free to reuse output_storage as it sees fit, or to
discard it and allocate new memory.
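A sketch of this protocol with a hypothetical Double Op (the class name
and the reuse policy are our own illustration)::

    import numpy as np

    import pytensor.tensor as pt
    from pytensor.graph.basic import Apply
    from pytensor.graph.op import Op

    class Double(Op):
        """Hypothetical Op that doubles its input; illustrates output_storage."""

        def make_node(self, x):
            x = pt.as_tensor_variable(x)
            return Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            (x,) = inputs
            out = output_storage[0]
            if out[0] is not None and out[0].shape == x.shape:
                # Reuse the pre-set ndarray: the right dtype and ndim are
                # guaranteed, but the shape had to be checked first.
                np.multiply(x, 2, out=out[0])
            else:
                # Discard whatever was there and allocate new memory.
                out[0] = np.asarray(2 * x)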
CSM creates a matrix from data, indices, and indptr vectors; its gradient
is the gradient of the data vector only. There are two complexities in
calculating this gradient:
1. The gradient may be sparser than the input matrix defined by (data,
indices, indptr). In this case, the data vector of the gradient will have
fewer elements than the data vector of the input, because sparse formats
remove 0s. Since we are only returning the gradient of the data vector,
the relevant 0s need to be added back.
2. The elements in the sparse dimension are not guaranteed to be sorted.
Therefore, the input data vector may have a different order than the
gradient data vector.
The grad implemented is regular, i.e. not structured.
The infer_shape method is not available for this Op.
We won't implement infer_shape for this op now: it would
require implementing a GetNNZ op, which would keep
the dependence on this op's input, so it wouldn't help
to remove computations from the graph. To remove computation,
we would need an infer_sparse_pattern feature, which is
trickier than the infer_shape feature. For example, how do we
handle the case where some op creates some 0 values? There is
then a dependence on the values themselves. We could write an
infer_shape for the last output, which is the shape, but I doubt
it would get used.
We don't return a view of the shape; we create a new ndarray from the shape
tuple.
eval_points – A Variable or list of Variables with the same length as inputs.
Each element of eval_points specifies the value of the corresponding
input at the point where the R-operator is to be evaluated.
Return type:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
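A small usage sketch of the user-facing R-operator, with an example of
our own::

    import pytensor
    import pytensor.tensor as pt

    x = pt.vector("x")
    v = pt.vector("v")
    y = x ** 2

    # Jacobian of y with respect to x, multiplied on the right by v.
    jv = pytensor.gradient.Rop(y, x, v)
    f = pytensor.function([x, v], jv)
    print(f([1.0, 2.0], [1.0, 1.0]))  # [2., 4.]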
The grad implementation can be controlled through the constructor via the
structured parameter: True provides a structured grad, while False
provides a regular grad. By default, the grad is structured.
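For intuition, a structured grad is defined only on the existing nonzero
pattern of the sparse input, while a regular grad covers every cell. A
sketch using sp_sum, whose sparse_grad flag plays an analogous role (our
choice of illustration, not necessarily the op documented here)::

    import pytensor
    import pytensor.sparse as sparse

    x = sparse.csc_matrix(name="x", dtype="float64")

    # Regular grad: a dense gradient over every cell of x.
    g_regular = pytensor.grad(sparse.sp_sum(x, sparse_grad=False), x)

    # Structured grad: a sparse gradient restricted to the nonzeros of x.
    g_structured = pytensor.grad(sparse.sp_sum(x, sparse_grad=True), x)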
CSR column indices are not necessarily sorted; likewise for CSC row
indices. Use ensure_sorted_indices when sorted indices are required
(e.g. when passing data to other libraries), as demonstrated below.
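A quick SciPy demonstration of the caveat (scipy's in-place
sort_indices is the analogous tool; the example data is ours)::

    import numpy as np
    import scipy.sparse as sp

    # Column indices within a row need not be sorted when a CSR matrix
    # is built directly from (data, indices, indptr).
    data = np.array([1.0, 2.0])
    indices = np.array([2, 0])  # unsorted within row 0
    indptr = np.array([0, 2])
    m = sp.csr_matrix((data, indices, indptr), shape=(1, 3))

    print(m.has_sorted_indices)  # False
    m.sort_indices()             # sorts indices (and data) in place
    print(m.indices, m.data)     # [0 2] [2. 1.]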
Notes
The grad implemented is regular, i.e. not structured.
Implement a subtensor of a sparse variable, returning a sparse matrix.
If you want to take only one element of a sparse matrix, see
GetItemScalar, which returns a tensor scalar.
Notes
Subtensor selection always returns a matrix, so indexing with [a:b, c:d]
is forced. If one index is a scalar, for instance x[a:b, c] or x[a, b:c],
an error will be raised. Use x[a:b, c:c+1] or x[a:a+1, b:c] instead.
The above indexing methods are not supported because the return value
would be a sparse matrix rather than a sparse vector, which would
deviate from NumPy's indexing rules. This decision was made largely
to preserve consistency between NumPy and PyTensor. It may be revised
when sparse vectors are supported.
Compute the dot product dot(x, y.T) = z for only a subset of z.
This is equivalent to p * dot(x, y.T), where * is the element-wise
product, x and y are the operands of the dot product, and p is a matrix
that contains 1 where the corresponding element of z should be calculated
and 0 where it shouldn't. Note that SamplingDot has a different interface
than dot because it requires x to be an m×k matrix while
y is an n×k matrix, instead of the usual k×n matrix.
Notes
It will work if the pattern is not binary-valued, but if the
pattern doesn't have a high sparsity proportion it will be slower
than a more optimized dot followed by a normal elemwise
multiplication. A SciPy sketch of the computation follows below.
The grad implemented is regular, i.e. not structured.
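A NumPy/SciPy sketch of what is computed, as our own illustration
rather than the op's actual implementation::

    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))  # m x k
    y = rng.normal(size=(5, 4))  # n x k

    p = sp.random(3, 5, density=0.3, format="csr")
    p.data[:] = 1.0  # binary sampling pattern

    # Only the entries where p is nonzero are kept.
    z = p.multiply(x @ y.T)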
WARNING: judgement call…
We are not using the structured flag in the comparison or hashing
because it doesn't change the perform method. We therefore
do want Sums with different structured values to be merged
by the merge optimization, and this requires them to compare equal.
Structured addition of a sparse matrix and a dense vector.
The elements of the vector are only added to the corresponding
non-zero elements of the sparse matrix. Therefore, this operation
outputs another sparse matrix.
Notes
The grad implemented is structured since the op is structured.
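A short usage sketch (the names are illustrative, and this assumes the vector broadcasts across columns, as in dense x + v):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt

x = sparse.csr_matrix(name="x", dtype="float64")
v = pt.dvector("v")
f = pytensor.function([x, v], sparse.structured_add_s_v(x, v))

a = sp.csr_matrix(np.array([[0.0, 1.0], [2.0, 0.0]]))
print(f(a, np.array([10.0, 20.0])).toarray())
# Only the stored non-zeros receive a contribution from v; the
# structural zeros at (0, 0) and (1, 1) remain exactly zero, so the
# result is still sparse.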
Returns C code that performs the computation associated with this Op,
given names for the inputs and outputs.
Parameters:
node (Apply instance) – The node for which we are compiling the current C code.
The same Op may be used in more than one node.
name (str) – A name that is automatically assigned and guaranteed to be
unique.
inputs (list of strings) – There is a string for each input of the function, and the
string is the name of a C variable pointing to that input.
The type of the variable depends on the declared type of
the input. There is a corresponding Python variable that
can be accessed by prepending "py_" to the name in the
list.
outputs (list of strings) – Each string is the name of a C variable where the Op should
store its output. The type depends on the declared type of
the output. There is a corresponding Python variable that
can be accessed by prepending "py_" to the name in the
list. In some cases the outputs will be preallocated and
the value of the variable may be pre-filled. The value for
an unallocated output is type-dependent.
sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
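As an illustrative sketch only (not the implementation of any particular PyTensor Op), a c_code for the hypothetical DoubleOp above could look like this; the %(fail)s symbol comes from sub:

    def c_code(self, node, name, inputs, outputs, sub):
        (x,) = inputs
        (z,) = outputs
        # Illustrative C body; a production Op would also manage the
        # reference count of the temporary Python float.
        return """
        Py_XDECREF(%(z)s);
        %(z)s = (PyArrayObject *)PyNumber_Multiply(
            (PyObject *)%(x)s, PyFloat_FromDouble(2.0));
        if (%(z)s == NULL) %(fail)s;
        """ % dict(x=x, z=z, fail=sub["fail"])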
Remove explicit zeros from a sparse matrix, and re-sort indices.
CSR column indices are not necessarily sorted. Likewise
for CSC row indices. Use clean when sorted
indices are required (e.g. when passing data to other
libraries) and to ensure there are no zeros in the data.
Parameters:
x – A sparse matrix.
Returns:
The same as x with indices sorted and zeros
removed.
Return type:
A sparse matrix
Notes
The grad implemented is regular, i.e. not structured.
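For instance (a sketch; clean is assumed to be re-exported at the pytensor.sparse level):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csr_matrix(name="x", dtype="float64")
f = pytensor.function([x], sparse.clean(x))

# Build a CSR matrix that stores an explicit zero at position (0, 2).
a = sp.csr_matrix(
    (np.array([1.0, 0.0, 3.0]),   # data (one stored zero)
     np.array([0, 2, 1]),         # indices
     np.array([0, 2, 3])),        # indptr
    shape=(2, 3),
)
print(a.nnz, f(a).nnz)  # 3 2 -- the explicit zero was removed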
Efficiently compute the dot product when one or both operands are sparse.
Supported formats are CSC and CSR. The output of the operation is dense.
Parameters:
x – Sparse or dense matrix variable.
y – Sparse or dense matrix variable.
Returns:
The dot product x @ y in a dense format.
Notes
The grad implemented is regular, i.e. not structured.
At least one of x or y must be a sparse matrix.
When the operation has the form dot(csr_matrix, dense),
the gradient of this operation can be computed in place
by UsmmCscDense, which leads to significant speed-ups.
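For example, multiplying a sparse matrix by a dense one (names illustrative):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse
import pytensor.tensor as pt

x = sparse.csr_matrix(name="x", dtype="float64")
y = pt.dmatrix("y")
f = pytensor.function([x, y], sparse.dot(x, y))  # output is dense

a = sp.csr_matrix(np.eye(3))
b = np.arange(9.0).reshape(3, 3)
print(f(a, b))  # identity @ b, returned as a dense ndarray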
Calculate the sum of a sparse matrix along the specified axis.
It performs a reduction along the given axis; when axis is None, the
sum is taken over all axes.
Parameters:
x – Sparse matrix.
axis – Axis along which the sum is applied. Integer or None.
sparse_grad (bool) – True to have a structured grad.
Returns:
The sum of x in a dense format.
Return type:
object
Notes
The grad implementation is controlled by the sparse_grad parameter.
True provides a structured grad, False a regular grad. In both cases
the grad returns a sparse matrix with the same format as x.
This op returns a dense tensor, not a sparse matrix.
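A usage sketch (sp_sum is assumed to be exposed at the pytensor.sparse level):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csr_matrix(name="x", dtype="float64")
row_sums = sparse.sp_sum(x, axis=1)    # dense vector of row sums
total = sparse.sp_sum(x, axis=None)    # dense scalar, sum of all entries
f = pytensor.function([x], [row_sums, total])

a = sp.csr_matrix(np.array([[1.0, 0.0], [2.0, 3.0]]))
print(f(a))  # [array([1., 5.]), array(6.)]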
Subtract two matrices, at least one of which is sparse.
This function selects the right op according to the input types.
Parameters:
x – A matrix variable.
y – A matrix variable.
Returns:
x - y
Return type:
A sparse matrix
Notes
At least one of x and y must be a sparse matrix.
The grad is structured only when one of the variables is a dense matrix.
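For example, with two sparse operands (names illustrative):

import numpy as np
import scipy.sparse as sp
import pytensor
import pytensor.sparse as sparse

x = sparse.csr_matrix(name="x", dtype="float64")
y = sparse.csr_matrix(name="y", dtype="float64")
f = pytensor.function([x, y], sparse.sub(x, y))  # output is sparse

a = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
b = sp.csr_matrix(np.array([[0.5, 0.0], [0.0, 0.5]]))
print(f(a, b).toarray())
# [[0.5 0. ]
#  [0.  1.5]]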
pytensor.sparse.basic.true_dot(x, y, grad_preserves_dense=True)
Operation for efficiently calculating the dot product when
one or both operands are sparse. Supported formats are CSC and CSR.
The output of the operation is sparse.
Parameters:
x – Sparse matrix.
y – Sparse matrix or 2d tensor variable.
grad_preserves_dense (bool) – If True (default), makes the grad of dense inputs dense.
Otherwise the grad is always sparse.
Returns:
The dot product x @ y in a sparse format.
Notes
The grad implemented is regular, i.e. not structured.
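A usage sketch, importing true_dot from pytensor.sparse.basic as in the signature above:

import numpy as np
import scipy.sparse as sp
import pytensor
from pytensor.sparse import csr_matrix
from pytensor.sparse.basic import true_dot

x = csr_matrix(name="x", dtype="float64")
y = csr_matrix(name="y", dtype="float64")
f = pytensor.function([x, y], true_dot(x, y))  # output stays sparse

a = sp.csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
print(f(a, a).toarray())
# [[1. 0.]
#  [0. 4.]]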