tensor.elemwise – Tensor Elemwise#
- class pytensor.tensor.elemwise.CAReduce(scalar_op, axis=None, dtype=None, acc_dtype=None, upcast_discrete_output=False)[source]#
Reduces an input tensor along the specified axes using a scalar operation.
The scalar op should be both commutative and associative.
CAReduce = Commutative Associative Reduce.
The output will have the same shape as the input minus the reduced dimensions. It will contain the result of accumulating all values over the reduced dimensions using the specified scalar Op.
Notes
CAReduce(add)      # sum (i.e., acts like the numpy sum operation)
CAReduce(mul)      # product
CAReduce(maximum)  # max
CAReduce(minimum)  # min
CAReduce(or_)      # any  # not lazy
CAReduce(and_)     # all  # not lazy
CAReduce(xor)      # a bit at 1 tells that there was an odd number of
                   # bits at 1 at that position; 0 means an even number
In order to (eventually) optimize memory usage patterns, CAReduce makes zero guarantees on the order in which it iterates over the dimensions and the elements of the array(s). Therefore, to ensure consistent results, the scalar operation represented by the reduction must be both commutative and associative (e.g. add, multiply, maximum, binary or/and/xor, but not subtract, divide or power).
- c_code(node, name, inames, onames, sub)[source]#
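As a usage sketch, assuming the scalar add Op from pytensor.scalar (in everyday code the same reduction is usually built with pt.sum):

import numpy as np
import pytensor
import pytensor.scalar as ps
import pytensor.tensor as pt
from pytensor.tensor.elemwise import CAReduce

x = pt.dmatrix("x")
row_sum = CAReduce(ps.add, axis=1)      # reduce with scalar add over axis 1
f = pytensor.function([x], row_sum(x))
print(f(np.arange(6.0).reshape(2, 3)))  # [ 3. 12.]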
Return the C implementation of an Op.
Returns C code that does the computation associated to this Op, given names for the inputs and outputs.
- Parameters:
node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
name (str) – A name that is automatically assigned and guaranteed to be unique.
inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
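For orientation, here is a hedged fragment showing how these names are typically used inside a hypothetical unary Op's c_code; the C body (a plain copy of the input) and all names are illustrative, not part of this module:

def c_code(self, node, name, inames, onames, sub):
    (x,) = inames          # C variable holding the input PyArrayObject*
    (z,) = onames          # C variable where the output must be stored
    fail = sub["fail"]     # error-handling snippet supplied by the CLinker
    return f"""
    Py_XDECREF({z});
    {z} = (PyArrayObject *) PyArray_FromArray({x}, NULL, NPY_ARRAY_ENSURECOPY);
    if ({z} == NULL) {{
        {fail}
    }}
    """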
- c_code_cache_version_apply(node)[source]#
Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an “unversioned” Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version
Notes
This function overrides c_code_cache_version unless it explicitly calls c_code_cache_version. The default implementation simply calls c_code_cache_version and ignores the node argument.
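For example, an Op whose C code does not depend on the node can simply version the class-level implementation (the numbers below are illustrative):

def c_code_cache_version(self):
    # Bump this tuple whenever the C code returned by c_code() changes,
    # so previously compiled modules are not reused from the cache.
    return (2, 1)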
- c_headers(**kwargs)[source]#
Return a list of header files required by code returned by this class.
These strings will be prefixed with #include and inserted at the beginning of the C source code.
Strings in this list that start neither with < nor " will be enclosed in double-quotes.
Examples
def c_headers(self, **kwargs):
    return ["<iostream>", "<math.h>", "/full/path/to/header.h"]
- make_node(input)[source]#
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
- Returns:
node – The constructed Apply node.
- Return type:
Apply
- perform(node, inp, out)[source]#
Calculate the function on the inputs and put the results in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
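A hedged sketch of a typical perform implementation, for a hypothetical elementwise doubling Op, showing how output_storage is filled:

import numpy as np

def perform(self, node, inputs, output_storage):
    (x,) = inputs                 # numeric value of node.inputs[0]
    out = output_storage[0]       # single-element list for node.outputs[0]
    out[0] = np.asarray(2 * x)    # set (or overwrite) the output value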
- class pytensor.tensor.elemwise.DimShuffle(input_broadcastable, new_order)[source]#
Allows reordering the dimensions of a tensor, as well as inserting or removing broadcastable dimensions.
In the following examples, ‘x’ means that we insert a broadcastable dimension and a numerical index represents the dimension of the same rank in the tensor passed to perform.
- Parameters:
input_broadcastable – The expected broadcastable pattern of the input.
new_order – A list representing the relationship between the input’s dimensions and the output’s dimensions. Each element of the list can either be an index or ‘x’. Indices must be encoded as Python integers, not PyTensor symbolic integers.
inplace (bool, optional) – If True (default), the output will be a view of the input.
Notes
If j = new_order[i] is an index, the output’s ith dimension will be the input’s jth dimension. If new_order[i] is ‘x’, the output’s ith dimension will be 1 and broadcast operations will be allowed to do broadcasting over that dimension.
If input.type.shape[i] != 1 then i must be found in new_order. Broadcastable dimensions, on the other hand, can be discarded.
DimShuffle((False, False, False), ["x", 2, "x", 0, 1])
This Op will only work on 3d tensors with no broadcastable dimensions. The first dimension will be broadcastable, then we will have the third dimension of the input tensor as the second of the resulting tensor, etc. If the tensor has shape (20, 30, 40), the resulting tensor will have dimensions (1, 40, 1, 20, 30). (An AxBxC tensor is mapped to a 1xCx1xAxB tensor.)
DimShuffle((True, False), [1])
This Op will only work on 2d tensors with the first dimension broadcastable. The second dimension of the input tensor will be the first dimension of the resulting tensor. If the tensor has shape (1, 20), the resulting tensor will have shape (20,).
Examples
DimShuffle((), ["x"])                         # make a 0d (scalar) into a 1d vector
DimShuffle((False, False), [0, 1])            # identity
DimShuffle((False, False), [1, 0])            # inverts the 1st and 2nd dimensions
DimShuffle((False,), ["x", 0])                # make a row out of a 1d vector (N to 1xN)
DimShuffle((False,), [0, "x"])                # make a column out of a 1d vector (N to Nx1)
DimShuffle((False, False, False), [2, 0, 1])  # AxBxC to CxAxB
DimShuffle((False, False), [0, "x", 1])       # AxB to Ax1xB
DimShuffle((False, False), [1, "x", 0])       # AxB to Bx1xA
The reordering of the dimensions can be done with the numpy.transpose function. Adding or removing dimensions can be done with reshape.
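In user code, DimShuffle nodes are usually created through the dimshuffle method of tensor variables; a small sketch:

import numpy as np
import pytensor
import pytensor.tensor as pt

x = pt.dmatrix("x")                # AxB
y = x.dimshuffle(1, "x", 0)        # AxB to Bx1xA, like DimShuffle((False, False), [1, "x", 0])
f = pytensor.function([x], y)
print(f(np.zeros((4, 3))).shape)   # (3, 1, 4)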
- R_op(inputs, eval_points)[source]#
Construct a graph for the R-operator.
This method is primarily used by Rop.
- Parameters:
inputs – The Op inputs.
eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
- Returns:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
- grad(inp, grads)[source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Using the reverse-mode AD characterization given in [1], for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
- Parameters:
inputs – The input variables.
output_grads – The gradients of the output variables.
- Returns:
The gradients with respect to each Variable in inputs.
- Return type:
grads
References
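As a usage-level illustration (a small sketch, not specific to DimShuffle), gradients built through Op.grad are normally requested with pytensor.grad:

import numpy as np
import pytensor
import pytensor.tensor as pt

x = pt.dmatrix("x")
cost = x.dimshuffle(1, 0).sum()   # transpose via DimShuffle, then reduce to a scalar
g = pytensor.grad(cost, x)        # gradient has the same shape as x
f = pytensor.function([x], g)
print(f(np.zeros((2, 3))))        # a (2, 3) array of ones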
- make_node(_input)[source]#
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
- Returns:
node – The constructed Apply node.
- Return type:
Apply
- perform(node, inp, out)[source]#
Calculate the function on the inputs and put the results in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
- class pytensor.tensor.elemwise.Elemwise(scalar_op, inplace_pattern=None, name=None, nfunc_spec=None, openmp=None)[source]#
Generalizes a scalar Op to tensors.
All the inputs must have the same number of dimensions. When the Op is performed, for each dimension, each input’s size for that dimension must be the same. As a special case, it can also be one, but only if the input’s broadcastable flag is True for that dimension. In that case, the tensor is (virtually) replicated along that dimension to match the size of the others.
The dtypes of the outputs mirror those of the scalar Op that is being generalized to tensors. In particular, if the calculations for an output are done in-place on an input, the output type must be the same as the corresponding input type (see the doc of ScalarOp to get help about controlling the output type).
Notes
- Elemwise(add): represents + on tensors (x + y)
- Elemwise(add, {0 : 0}): represents the += operation (x += y)
- Elemwise(add, {0 : 1}): represents += on the second argument (y += x)
- Elemwise(mul)(np.random.random((10, 5)), np.random.random((1, 5))): the second input is completed along the first dimension to match the first input
- Elemwise(true_div)(np.random.random((10, 5)), np.random.random((10, 1))): same but along the second dimension
- Elemwise(int_div)(np.random.random((1, 5)), np.random.random((10, 1))): the output has size (10, 5)
- Elemwise(log)(np.random.random((3, 4, 5)))
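A minimal usage sketch, assuming the scalar add Op from pytensor.scalar (this is essentially how pt.add and the + operator are built):

import numpy as np
import pytensor
import pytensor.scalar as ps
import pytensor.tensor as pt
from pytensor.tensor.elemwise import Elemwise

x = pt.dmatrix("x")
y = pt.dmatrix("y")
add_tensors = Elemwise(ps.add)       # generalize scalar addition to tensors
f = pytensor.function([x, y], add_tensors(x, y))
print(f(np.ones((2, 3)), np.full((2, 3), 2.0)))  # a (2, 3) array of threes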
- L_op(inputs, outs, ograds)[source]#
Construct a graph for the L-operator.
The L-operator computes a row vector times the Jacobian.
This method dispatches to Op.grad() by default. In one sense, this method provides the original outputs when they’re needed to compute the return value, whereas Op.grad doesn’t.
See Op.grad for a mathematical explanation of the inputs and outputs of this method.
- Parameters:
inputs – The inputs of the Apply node using this Op.
outputs – The outputs of the Apply node using this Op.
output_grads – The gradients of the output variables.
- R_op(inputs, eval_points)[source]#
Construct a graph for the R-operator.
This method is primarily used by Rop.
- Parameters:
inputs – The Op inputs.
eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
- Returns:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
- c_code(node, nodename, inames, onames, sub)[source]#
Return the C implementation of an Op.
Returns C code that does the computation associated to this Op, given names for the inputs and outputs.
- Parameters:
node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.
name (str) – A name that is automatically assigned and guaranteed to be unique.
inputs (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.
outputs (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.
sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
- c_code_cache_version_apply(node)[source]#
Return a tuple of integers indicating the version of this Op.
An empty tuple indicates an “unversioned” Op that will not be cached between processes.
The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.
See also
c_code_cache_version
Notes
This function overrides c_code_cache_version unless it explicitly calls c_code_cache_version. The default implementation simply calls c_code_cache_version and ignores the node argument.
- c_header_dirs(**kwargs)[source]#
Return a list of header search paths required by code returned by this class.
Provides search paths for headers, in addition to those in any relevant environment variables.
Note
For Unix compilers, these are the things that get -I prefixed in the compiler command line arguments.
def c_header_dirs(self, **kwargs):
    return ["/usr/local/include", "/opt/weirdpath/src/include"]
- c_support_code(**kwargs)[source]#
Return utility code for use by a Variable or Op.
This is included at global scope prior to the rest of the code for this class.
Question: How many times will this support code be emitted for a graph with many instances of the same type?
- Return type:
str
- c_support_code_apply(node, nodename)[source]#
Return Apply-specialized utility code for use by an Op that will be inserted at global scope.
- Parameters:
node (Apply) – The node in the graph being compiled.
name (str) – A string or number that serves to uniquely identify this node. Symbol names defined by this support code should include the name, so that they can be called from the CLinkerOp.c_code(), and so that they do not cause name collisions.
Notes
This function is called in addition to CLinkerObject.c_support_code() and will supplement whatever is returned from there.
- get_output_info(dim_shuffle, *inputs)[source]#
Return the outputs’ dtype and broadcastable pattern, along with the dimshuffled inputs.
- make_node(*inputs)[source]#
If the inputs have different numbers of dimensions, their shapes are left-completed to the greatest number of dimensions with 1s using DimShuffle.
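For instance, adding a matrix and a vector triggers this left-completion: the vector is dimshuffled to a row before the Elemwise addition (a small sketch):

import pytensor.tensor as pt

m = pt.dmatrix("m")
v = pt.dvector("v")
z = m + v                            # v is left-completed to shape (1, ?) via DimShuffle
dimshuffled_v = z.owner.inputs[1]    # the second input of the Elemwise add node
print(type(dimshuffled_v.owner.op))  # a DimShuffle instance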
- perform(node, inputs, output_storage)[source]#
Calculate the function on the inputs and put the results in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
- prepare_node(node, storage_map, compute_map, impl)[source]#
Make any special modifications that the Op needs before doing Op.make_thunk().
This can modify the node inplace and should return nothing.
It can be called multiple times with different impl values.
Warning
It is the Op’s responsibility to not re-prepare the node when it isn’t good to do so.
- pytensor.tensor.elemwise.get_normalized_batch_axes(core_axes: None | int | tuple[int, ...], core_ndim: int, batch_ndim: int) → tuple[int, ...][source]#
Compute batch axes for a batched operation, from the core input ndim and axes.
e.g., sum(matrix, axis=None) -> sum(tensor4, axis=(2, 3)); batch_axes(None, 2, 4) -> (2, 3)
e.g., sum(matrix, axis=0) -> sum(tensor4, axis=(2,)); batch_axes(0, 2, 4) -> (2,)
e.g., sum(tensor3, axis=(0, -1)) -> sum(tensor4, axis=(1, 3)); batch_axes((0, -1), 3, 4) -> (1, 3)
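The same cases, checked by calling the function directly:

from pytensor.tensor.elemwise import get_normalized_batch_axes

print(get_normalized_batch_axes(None, 2, 4))     # (2, 3)
print(get_normalized_batch_axes(0, 2, 4))        # (2,)
print(get_normalized_batch_axes((0, -1), 3, 4))  # (1, 3)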
- pytensor.tensor.elemwise.scalar_elemwise(*symbol, nfunc=None, nin=None, nout=None, symbolname=None)[source]#
Replace a symbol definition with an Elemwise-wrapped version of the corresponding scalar Op.
If it is not None, the nfunc argument should be a string such that getattr(numpy, nfunc) implements a vectorized version of the Elemwise operation. nin is the number of inputs expected by that function, and nout is the number of destination inputs it takes. That is, the function should take nin + nout inputs. nout == 0 means that the numpy function does not take a NumPy array argument to put its result in.
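A hedged sketch of the decorator pattern this enables, modeled on how pytensor.tensor.math defines its symbols; the stub below is illustrative and assumes the function name matches a scalar Op in pytensor.scalar:

from pytensor.tensor.elemwise import scalar_elemwise


@scalar_elemwise
def add(a, b):
    """elementwise addition"""
    # The body is never executed: the decorator replaces this stub with the
    # Elemwise-wrapped version of the scalar Op of the same name (pytensor.scalar.add).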