ops – Common Ops and utilities for building new Ops#

This module contains auxiliary Ops used during the compilation phase, as well as an Op-building class (FromFunctionOp) and a decorator (as_op()) that make it faster to create new Ops.

class pytensor.compile.ops.DeepCopyOp[source]#
c_code(node, name, inames, onames, sub)[source]#

Return the C implementation of an Op.

Returns C code that does the computation associated to this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.

  • name (str) – A name that is automatically assigned and guaranteed to be unique.

  • inames (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.

  • onames (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.

  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').
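
A minimal sketch of what such a method can return, assuming a single tensor input and output (illustrative only, not DeepCopyOp's actual implementation); the variable names come from inames/onames and the failure handler from sub:

def c_code(self, node, name, inames, onames, sub):
    (x,) = inames
    (z,) = onames
    fail = sub["fail"]
    # Interpolate the C variable names and the failure handler into a
    # C template; here the output is a copy of the input ndarray.
    return """
    Py_XDECREF(%(z)s);
    %(z)s = (PyArrayObject *) PyArray_Copy(%(x)s);
    if (%(z)s == NULL) {
        %(fail)s
    }
    """ % locals()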

c_code_cache_version()[source]#

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an “unversioned” Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply
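
A minimal versioned implementation just returns a constant tuple and bumps it whenever the generated C code changes:

def c_code_cache_version(self):
    # Bump this value whenever c_code's output changes, so stale
    # compiled modules are invalidated.
    return (1,)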

make_node(x)[source]#

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply
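
A typical implementation for an Op whose output has the same type as its input might look like this (a sketch, not necessarily DeepCopyOp's exact code):

from pytensor.graph.basic import Apply
import pytensor.tensor as pt

def make_node(self, x):
    # Wrap the input as a symbolic variable and create an Apply node
    # with a single output of the same type as the input.
    x = pt.as_tensor_variable(x)
    return Apply(self, [x], [x.type()])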

perform(node, args, outs)[source]#

Calculate the function on the inputs and put the results in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of one Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
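
As an illustration, a deep-copy Op's perform could be written as follows (a minimal sketch; the actual DeepCopyOp dispatches on the input's type):

import copy

def perform(self, node, inputs, output_storage):
    (x,) = inputs
    # Each output_storage entry is a single-element list; set its only
    # element rather than rebinding or resizing the list itself.
    output_storage[0][0] = copy.deepcopy(x)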

class pytensor.compile.ops.FromFunctionOp(fn, itypes, otypes, infer_shape)[source]#

Build a basic PyTensor Op around a function.

Since the resulting Op is very basic and is missing most of the optional functionality, some optimizations may not apply. To help the optimizer, you can supply an infer_shape function that computes the shapes of the outputs given the shapes of the inputs.

Also, the gradient is undefined in the resulting Op, and PyTensor will raise an error if you attempt to compute the gradient of a graph containing this Op.
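
For instance, a plain NumPy function can be wrapped directly (a sketch; the as_op() decorator below offers the same functionality with less boilerplate):

import pytensor.tensor as pt
from pytensor.compile.ops import FromFunctionOp

def _double(x):
    return 2 * x

# itypes/otypes declare the symbolic input and output types; no
# infer_shape is provided here, so shape-based optimizations won't apply.
double_op = FromFunctionOp(_double, itypes=[pt.dmatrix], otypes=[pt.dmatrix], infer_shape=None)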

perform(node, inputs, outputs)[source]#

Calculate the function on the inputs and put the results in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of one Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.

class pytensor.compile.ops.OutputGuard[source]#

This op is used only internally by PyTensor.

Only the AddDestroyHandler optimizer tries to insert them in the graph.

This Op is declared as destructive even though it does not actually destroy anything; it returns a view. This is used to prevent destruction of the output variables of a PyTensor function.

There is a mechanism in PyTensor that should already prevent this, but OutputGuard adds a safeguard: an optimization that runs before the add_destroy_handler phase could bypass that mechanism by performing in-place optimizations.

TODO: find a current full explanation.

destroy_map: dict[int, list[int]] = {0: [0]}[source]#

A dict that maps output indices to the input indices upon which they operate in-place.

Examples

destroy_map = {0: [1]} # first output operates in-place on second input
destroy_map = {1: [0]} # second output operates in-place on first input

class pytensor.compile.ops.ViewOp[source]#

Returns an inplace view of the input. Used internally by PyTensor.

c_code(node, nodename, inp, out, sub)[source]#

Return the C implementation of an Op.

Returns C code that does the computation associated to this Op, given names for the inputs and outputs.

Parameters:
  • node (Apply instance) – The node for which we are compiling the current C code. The same Op may be used in more than one node.

  • nodename (str) – A name that is automatically assigned and guaranteed to be unique.

  • inp (list of strings) – There is a string for each input of the function, and the string is the name of a C variable pointing to that input. The type of the variable depends on the declared type of the input. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list.

  • out (list of strings) – Each string is the name of a C variable where the Op should store its output. The type depends on the declared type of the output. There is a corresponding Python variable that can be accessed by prepending "py_" to the name in the list. In some cases the outputs will be preallocated and the value of the variable may be pre-filled. The value for an unallocated output is type-dependent.

  • sub (dict of strings) – Extra symbols defined in CLinker sub symbols (such as 'fail').

c_code_cache_version()[source]#

Return a tuple of integers indicating the version of this Op.

An empty tuple indicates an “unversioned” Op that will not be cached between processes.

The cache mechanism may erase cached modules that have been superseded by newer versions. See ModuleCache for details.

See also

c_code_cache_version_apply

grad(args, g_outs)[source]#

Construct a graph for the gradient with respect to each input variable.

Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.

Using the reverse-mode AD characterization given in [1], for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in

\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
Parameters:
  • inputs – The input variables.

  • output_grads – The gradients of the output variables.

Returns:

The gradients with respect to each Variable in inputs.

Return type:

grads

References

[1] Giles, Mike. 2008. "An Extended Collection of Matrix Derivative Results for Forward and Reverse Mode Automatic Differentiation."
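
For a view Op such as this one, the output is the input itself, so the gradient simply passes through unchanged (a sketch consistent with the view semantics):

def grad(self, args, g_outs):
    # The view does not transform its input, so the input gradient is
    # exactly the output gradient.
    return [g_out for g_out in g_outs]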

make_node(x)[source]#

Construct an Apply node that represents the application of this operation to the given inputs.

This must be implemented by sub-classes.

Returns:

node – The constructed Apply node.

Return type:

Apply

perform(node, inp, out)[source]#

Calculate the function on the inputs and put the results in the output storage.

Parameters:
  • node – The symbolic Apply node that represents this computation.

  • inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.

  • output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of one Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.

Notes

The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type, for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could’ve been allocated by another Op’s perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
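
Since ViewOp returns a view, its perform can simply store the input object itself (a minimal sketch; view_map below declares the aliasing):

def perform(self, node, inp, out):
    (x,) = inp
    # The output is the very same object as the input.
    out[0][0] = x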

view_map: dict[int, list[int]] = {0: [0]}[source]#

A dict that maps output indices to the input indices of which they are a view.

Examples

view_map = {0: [1]} # first output is a view of second input
view_map = {1: [0]} # second output is a view of first input

pytensor.compile.ops.as_op(itypes, otypes, infer_shape=None)[source]#

Decorator that converts a function into a basic PyTensor op that will call the supplied function as its implementation.

It takes an optional infer_shape parameter that should be a callable with this signature:

def infer_shape(fgraph, node, input_shapes):
    ...
    return output_shapes

Here input_shapes and output_shapes are lists of tuples that represent the shape of the corresponding inputs/outputs.
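
For example, for an Op whose single output has the same shape as its first input, a suitable callable would be (a sketch; each entry of input_shapes is a tuple of symbolic shapes):

def infer_shape(fgraph, node, input_shapes):
    # Return one shape tuple per output.
    return [input_shapes[0]]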

This should not be used when performance is a concern since the very basic nature of the resulting Op may interfere with certain graph optimizations.

Examples

import numpy
import pytensor
from pytensor.compile.ops import as_op

@as_op(itypes=[pytensor.tensor.fmatrix, pytensor.tensor.fmatrix],
       otypes=[pytensor.tensor.fmatrix])
def numpy_dot(a, b):
    return numpy.dot(a, b)
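
The resulting numpy_dot behaves like any other Op when building and compiling a graph, for example (a sketch):

import pytensor
import pytensor.tensor as pt

x = pt.fmatrix("x")
y = pt.fmatrix("y")
f = pytensor.function([x, y], numpy_dot(x, y))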

pytensor.compile.ops.register_deep_copy_op_c_code(typ, code, version=())[source]#

Tell DeepCopyOp how to generate C code for a PyTensor Type.

Parameters:
  • typ (PyTensor type) – It must be the PyTensor class itself and not an instance of the class.

  • code (C code) – C code that deep-copies the PyTensor type typ. Use %(iname)s and %(oname)s for the input and output C variable names, respectively.

  • version – A number indicating the version of the code, for the cache.
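
A registration call might look like the following sketch, where MyType and my_type_deep_copy are hypothetical placeholders:

register_deep_copy_op_c_code(
    MyType,  # hypothetical PyTensor Type class
    "%(oname)s = my_type_deep_copy(%(iname)s);  /* hypothetical C helper */",
    version=(1,),
)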

pytensor.compile.ops.register_view_op_c_code(type, code, version=())[source]#

Tell ViewOp how to generate C code for a PyTensor Type.

Parameters:
  • type (PyTensor type) – It must be the PyTensor class itself and not an instance of the class.

  • code (C code) – C code that returns a view for the PyTensor type type. Use %(iname)s and %(oname)s for the input and output C variable names, respectively.

  • version – A number indicating the version of the code, for the cache.