graph – Objects and functions for computational graphs
- class pytensor.graph.op.HasInnerGraph[source]#
A mixin for an Op that contains an inner graph.
- fgraph: FunctionGraph[source]#
A FunctionGraph of the inner function.
- abstract property inner_inputs: list[pytensor.graph.basic.Variable][source]#
The inner function’s inputs.
- abstract property inner_outputs: list[pytensor.graph.basic.Variable][source]#
The inner function’s outputs.
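One built-in Op that uses this mixin is OpFromGraph, which wraps a sub-graph as a single Op. The short sketch below is illustrative only (it assumes pytensor is installed and that OpFromGraph lives in pytensor.compile.builders) and shows how the inner graph is exposed:

import pytensor.tensor as pt
from pytensor.compile.builders import OpFromGraph

x, y = pt.scalars("x", "y")
op = OpFromGraph([x, y], [x * y + x])   # wrap the sub-graph as one Op
print(op.fgraph)          # the FunctionGraph of the inner function
print(op.inner_inputs)    # the inner function's inputs
print(op.inner_outputs)   # the inner function's outputs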
- class pytensor.graph.op.Op[source]#
A class that models and constructs operations in a graph.
An Op instance has several responsibilities:
- construct Apply nodes via the Op.make_node() method,
- perform the numeric calculation of the modeled operation via the Op.perform() method,
- and (optionally) build the gradient-calculating sub-graphs via the Op.grad() method.
To see how Op, Type, Variable, and Apply fit together, see the page on graph – Interface for the PyTensor graph.
For more details regarding how these methods should behave, see the Op Contract in the Sphinx docs (advanced tutorial on Op making).
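As an illustration, here is a minimal, hypothetical Op covering the first two responsibilities: make_node builds the Apply node and perform does the numeric work. The name Square and all details below are invented for this sketch, not taken from the PyTensor docs.

import pytensor
import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op

class Square(Op):
    def make_node(self, x):
        x = pt.as_tensor_variable(x)
        # One output with the same type as the input.
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = x * x   # numeric (NumPy) computation

x = pt.vector("x")
y = Square()(x)              # Op.__call__ -> make_node -> output Variable
f = pytensor.function([x], y)
print(f([1.0, 2.0, 3.0]))    # expected ≈ [1., 4., 9.]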
- L_op(inputs: Sequence[Variable], outputs: Sequence[Variable], output_grads: Sequence[Variable]) list[pytensor.graph.basic.Variable] [source]#
Construct a graph for the L-operator.
The L-operator computes a row vector times the Jacobian.
This method dispatches to Op.grad() by default. In one sense, this method provides the original outputs when they're needed to compute the return value, whereas Op.grad doesn't.
See Op.grad for a mathematical explanation of the inputs and outputs of this method.
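For a concrete sense of what the L-operator computes, the following sketch uses pytensor.gradient.Lop (assumed here to be the user-facing helper built on top of Op.L_op) to form a row-vector-times-Jacobian graph:

import pytensor
import pytensor.tensor as pt
from pytensor.gradient import Lop

x = pt.vector("x")
y = pt.tanh(x)                 # elementwise, so the Jacobian is diagonal
v = pt.vector("v")             # the row vector multiplying the Jacobian

vJ = Lop(y, x, v)              # graph for v @ d(y)/d(x)
f = pytensor.function([x, v], vJ)
print(f([0.0, 1.0], [1.0, 1.0]))   # expected ≈ [1.0, 0.42]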
- R_op(inputs: list[pytensor.graph.basic.Variable], eval_points: Union[Variable, list[pytensor.graph.basic.Variable]]) list[pytensor.graph.basic.Variable] [source]#
Construct a graph for the R-operator.
This method is primarily used by Rop.
- Parameters:
inputs – The Op inputs.
eval_points – A Variable or list of Variables with the same length as inputs. Each element of eval_points specifies the value of the corresponding input at the point where the R-operator is to be evaluated.
- Returns:
rval[i] should be Rop(f=f_i(inputs), wrt=inputs, eval_points=eval_points).
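Analogously to the L-operator above, the R-operator builds a Jacobian-times-vector (directional derivative) graph; a hedged sketch using pytensor.gradient.Rop, the user-facing helper mentioned above:

import pytensor
import pytensor.tensor as pt
from pytensor.gradient import Rop

x = pt.vector("x")
y = pt.sum(x ** 2)             # scalar output
v = pt.vector("v")             # direction of the perturbation

Jv = Rop(y, x, v)              # graph for d(y)/d(x) @ v
f = pytensor.function([x, v], Jv)
print(f([1.0, 2.0], [1.0, 0.0]))   # dy/dx = [2, 4], so expected ≈ 2.0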
- static add_tag_trace(thing: T, user_line: Optional[int] = None) T [source]#
Add tag.trace to a node or variable.
The argument is modified in place and then returned.
- Parameters:
thing – The object to which .tag.trace is added.
user_line – The maximum number of user traceback lines to keep.
Notes
config.traceback__limit is also used for the maximum number of stack levels inspected.
- default_output: Optional[int] = None[source]#
An int that specifies which output Op.__call__() should return. If None, then all outputs are returned.
A subclass should not change this class variable, but instead override it with a subclass variable or an instance variable.
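For illustration, a hypothetical two-output Op (all names below are invented for this sketch) where default_output = 0 makes Op.__call__() return only the first output, while both outputs remain reachable through the Apply node:

import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op

class DivMod(Op):
    default_output = 0                 # __call__ returns outputs[0] only

    def make_node(self, x, y):
        x = pt.as_tensor_variable(x)
        y = pt.as_tensor_variable(y)
        return Apply(self, [x, y], [x.type(), x.type()])

    def perform(self, node, inputs, output_storage):
        x, y = inputs
        output_storage[0][0] = x // y  # quotient
        output_storage[1][0] = x % y   # remainder

quot = DivMod()(pt.vector("a"), pt.vector("b"))   # just the quotient
quot_and_rem = quot.owner.outputs                 # both output Variables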
- destroy_map: dict[int, list[int]] = {}[source]#
A dict that maps output indices to the input indices upon which they operate in-place.
Examples
destroy_map = {0: [1]}  # first output operates in-place on second input
destroy_map = {1: [0]}  # second output operates in-place on first input
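Continuing the hypothetical Square sketch from above, an in-place variant could declare that its output overwrites its input; the graph machinery then knows that other consumers of that input must be protected (a hedged sketch, not a library example):

import numpy as np

class SquareInplace(Square):           # Square is the hypothetical Op sketched earlier
    destroy_map = {0: [0]}             # output 0 destroys (reuses) input 0

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        np.square(x, out=x)            # overwrite the input buffer in place
        output_storage[0][0] = x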
- do_constant_folding(fgraph: FunctionGraph, node: Apply) bool [source]#
Determine whether or not constant folding should be performed for the given node.
This allows each Op to determine whether it wants to be constant folded when all its inputs are constant. This lets the Op choose its memory/speed trade-off. It can also make things faster, since constants can't be used for in-place operations (see *IncSubtensor).
- Parameters:
node (Apply) – The node for which the constant folding determination is made.
- Returns:
res
- Return type:
bool
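A hedged sketch of opting out: overriding do_constant_folding on the hypothetical Square Op from earlier keeps the node in the compiled graph even when its input is a constant (trading compile-time memory for runtime work):

class NonFoldedSquare(Square):          # Square is the hypothetical Op sketched earlier
    def do_constant_folding(self, fgraph, node):
        return False                    # never fold this node, even for constant inputs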
- grad(inputs: Sequence[Variable], output_grads: Sequence[Variable]) list[pytensor.graph.basic.Variable] [source]#
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input, computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
Using the reverse-mode AD characterization given in [1], for a \(C = f(A, B)\) representing the function implemented by the Op and its two arguments \(A\) and \(B\), given by the Variables in inputs, the values returned by Op.grad represent the quantities \(\bar{A} \equiv \frac{\partial S_O}{\partial A}\) and \(\bar{B}\), for some scalar output term \(S_O\) of \(C\) in
\[\operatorname{Tr}\left(\bar{C}^\top dC\right) = \operatorname{Tr}\left(\bar{A}^\top dA\right) + \operatorname{Tr}\left(\bar{B}^\top dB\right)\]
- Parameters:
inputs – The input variables.
output_grads – The gradients of the output variables.
- Returns:
The gradients with respect to each Variable in inputs.
- Return type:
grads
References
[1] Giles, Mike. 2008. "An Extended Collection of Matrix Derivative Results for Forward and Reverse Mode Automatic Differentiation."
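Continuing the hypothetical Square sketch from earlier, its grad method would receive the symbolic output gradient \(\bar{y}\) and return \(\bar{x} = \bar{y} \cdot 2x\); pytensor.grad then calls it while traversing the graph (a hedged sketch, not a library example):

import pytensor
import pytensor.tensor as pt

class SquareWithGrad(Square):           # Square is the hypothetical Op sketched earlier
    def grad(self, inputs, output_grads):
        (x,) = inputs
        (gz,) = output_grads            # symbolic gradient w.r.t. the output
        return [gz * 2 * x]             # gradient w.r.t. the single input

x = pt.vector("x")
cost = pt.sum(SquareWithGrad()(x))
g = pytensor.grad(cost, x)              # drives SquareWithGrad.grad under the hood
print(pytensor.function([x], g)([1.0, 2.0]))   # expected ≈ [2., 4.]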
- make_node(*inputs: Variable) Apply [source]#
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by sub-classes.
- Returns:
node – The constructed Apply node.
- Return type:
Apply
- make_py_thunk(node: Apply, storage_map: dict[pytensor.graph.basic.Variable, list[Optional[Any]]], compute_map: dict[pytensor.graph.basic.Variable, list[bool]], no_recycling: list[pytensor.graph.basic.Variable], debug: bool = False) ThunkType [source]#
Make a Python thunk.
Like Op.make_thunk(), but only makes Python thunks.
- make_thunk(node: Apply, storage_map: dict[pytensor.graph.basic.Variable, list[Optional[Any]]], compute_map: dict[pytensor.graph.basic.Variable, list[bool]], no_recycling: list[pytensor.graph.basic.Variable], impl: Optional[str] = None) ThunkType [source]#
Create a thunk.
This function must return a thunk, that is a zero-arguments function that encapsulates the computation to be performed by this op on the arguments of the node.
- Parameters:
node – Something previously returned by Op.make_node().
storage_map – A dict mapping Variables to single-element lists where a computed value for each Variable may be found.
compute_map – A dict mapping Variables to single-element lists where a boolean value can be found. The boolean indicates whether the Variable's storage_map container contains a valid value (i.e. True) or whether it has not been computed yet (i.e. False).
no_recycling – List of Variables for which it is forbidden to reuse memory allocated by a previous call.
impl (str) – Description for the type of node created (e.g. "c", "py", etc.)
Notes
If the thunk consults the storage_map on every call, it is safe for it to ignore the no_recycling argument, because elements of the no_recycling list will have a value of None in the storage_map. If the thunk can potentially cache return values (like CLinker does), then it must not do so for variables in the no_recycling list.
Op.prepare_node() is always called. If it tries 'c' and that fails, then it tries 'py', and Op.prepare_node() will be called twice.
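To make the storage_map/compute_map contract concrete, here is a heavily simplified, hypothetical sketch of what a Python thunk built on top of Op.perform() might look like (this is not the library's actual implementation):

def make_simple_thunk(op, node, storage_map, compute_map):
    input_storage = [storage_map[v] for v in node.inputs]
    output_storage = [storage_map[v] for v in node.outputs]

    def thunk():
        # Read current input values out of their single-element containers.
        inputs = [cell[0] for cell in input_storage]
        # Let the Op write its results into the output containers.
        op.perform(node, inputs, output_storage)
        # Mark the outputs as computed.
        for v in node.outputs:
            compute_map[v][0] = True

    return thunk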
- abstract perform(node: Apply, inputs: Sequence[Any], output_storage: list[list[Optional[Any]]]) None [source]#
Calculate the function on the inputs and put the variables in the output storage.
- Parameters:
node – The symbolic Apply node that represents this computation.
inputs – Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
output_storage – List of mutable single-element lists (do not change the length of these lists). Each sub-list corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sub-lists.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such pre-set values were produced by a previous call to this Op.perform(); they could've been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
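As a hedged sketch of the buffer-reuse freedom described above, a perform method may check whether output_storage already holds a compatible ndarray and reuse it, or allocate fresh memory otherwise (continuing the hypothetical Square Op from earlier):

import numpy as np

class SquareReusingBuffer(Square):      # Square is the hypothetical Op sketched earlier
    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        out = output_storage[0][0]
        # Reuse a pre-set buffer only if it has the right shape and dtype.
        if out is None or out.shape != x.shape or out.dtype != x.dtype:
            out = np.empty_like(x)
        np.square(x, out=out)
        output_storage[0][0] = out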
- prepare_node(node: Apply, storage_map: Optional[dict[pytensor.graph.basic.Variable, list[Optional[Any]]]], compute_map: Optional[dict[pytensor.graph.basic.Variable, list[bool]]], impl: Optional[str]) None [source]#
Make any special modifications that the Op needs before doing Op.make_thunk().
This can modify the node in place and should return nothing.
It can be called multiple times with different impl values.
Warning
It is the Op's responsibility not to re-prepare the node when it isn't good to do so.
- pytensor.graph.op.compute_test_value(node: Apply)[source]#
Computes the test value of a node.
- Parameters:
node (Apply) – The Apply node for which the test value is computed.
- Returns:
The tag.test_values are updated in each Variable in node.outputs.
- Return type:
None
- pytensor.graph.op.get_test_value(v: Any) Any [source]#
Get the test value for v.
If input v is not already a variable, it is turned into one by calling as_tensor_variable(v).
- Raises:
AttributeError – If no test value is set.
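A short sketch of how a test value is attached and retrieved, assuming the usual tag.test_value convention:

import numpy as np
import pytensor.tensor as pt
from pytensor.graph.op import get_test_value

x = pt.vector("x")
x.tag.test_value = np.array([1.0, 2.0, 3.0])
print(get_test_value(x))               # expected: the attached array([1., 2., 3.])

y = pt.vector("y")                     # no test value attached
# get_test_value(y) would raise AttributeError here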
- pytensor.graph.op.get_test_values(*args: Variable) Union[Any, list[Any]] [source]#
Get test values for multiple Variables.
Intended use:
for val_1, ..., val_n in get_debug_values(var_1, ..., var_n):
    if some condition on val_1, ..., val_n is not met:
        missing_test_message("condition was not met")
Given a list of variables, get_debug_values does one of three things:
1. If the interactive debugger is off, returns an empty list.
2. If the interactive debugger is on, and all variables have debug values, returns a list containing a single element. This single element is either:
- if there is only one variable, its value;
- otherwise, a tuple containing the debug values of all the variables.
3. If the interactive debugger is on, and some variable does not have a debug value, issue a missing_test_message about the variable, and, if still in control of execution, return an empty list.
- pytensor.graph.op.missing_test_message(msg: str) None [source]#
Display a message saying that some test_value is missing.
This uses the appropriate form based on config.compute_test_value:
- off:
The interactive debugger is off, so we do nothing.
- ignore:
The interactive debugger is set to ignore missing inputs, so do nothing.
- warn:
Display msg as a warning.
- Raises:
AttributeError – With msg as the exception text.