graph – Interface for the PyTensor graph#

Reference#

Core graph classes.

class pytensor.graph.basic.Apply(op: OpType, inputs: Sequence[Variable], outputs: Sequence[Variable])[source]#

A Node representing the application of an operation to inputs.

Basically, an Apply instance is an object that represents the Python statement outputs = op(*inputs).

This class is typically instantiated by an Op.make_node method, which is called by Op.__call__.

The function pytensor.compile.function.function uses Apply.inputs together with Variable.owner to search the expression graph and determine which inputs are necessary to compute the function’s outputs.

A Linker uses the Apply instance’s op field to compute numeric values for the output variables.

Notes

The Variable.owner field of each Apply.outputs element is set to self in Apply.make_node.

If an output element has an owner that is neither None nor self, then a ValueError exception will be raised.

op[source]#

The operation that produces outputs given inputs.

inputs[source]#

The arguments of the expression modeled by the Apply node.

outputs[source]#

The outputs of the expression modeled by the Apply node.

clone(clone_inner_graph: bool = False) Apply[OpType][source]#

Clone this Apply instance.

Parameters

clone_inner_graph – If True, clone HasInnerGraph Ops and their inner-graphs.

Return type

A new Apply instance with new outputs.

Notes

Tags are copied from self to the returned instance.

clone_with_new_inputs(inputs: Sequence[Variable], strict=True, clone_inner_graph=False) Apply[OpType][source]#

Duplicate this Apply instance in a new graph.

Parameters
  • inputs (list of Variables) – List of Variable instances to use as inputs.

  • strict (bool) – If True, the type fields of all the inputs must be equal to the current ones (or compatible, for instance TensorType of the same dtype and broadcastable patterns, in which case they will be converted into current Type), and returned outputs are guaranteed to have the same types as self.outputs. If False, then there’s no guarantee that the clone’s outputs will have the same types as self.outputs, and cloning may not even be possible (it depends on the Op).

  • clone_inner_graph (bool) – If True, clone HasInnerGraph Ops and their inner-graphs.

Returns

An Apply instance with the same Op but different outputs.

Return type

object

default_output()[source]#

Returns the default output for this node.

Returns

An element of self.outputs, typically self.outputs[0].

Return type

Variable instance

Notes

May raise AttributeError if self.op.default_output is out of range, or if there are multiple outputs and self.op.default_output does not exist.

get_parents()[source]#

Return a list of the parents of this node. Should return a copy; i.e., modifying the return value should not modify the graph structure.

property nin[source]#

The number of inputs.

property nout[source]#

The number of outputs.

property out[source]#

An alias for self.default_output

run_params()[source]#

Return the params for the node, or NoParams if no params are set.

class pytensor.graph.basic.AtomicVariable(type: _TypeType, name: Optional[str] = None, **kwargs)[source]#

A node type that has no ancestors and should never be considered an input to a graph.

clone(**kwargs)[source]#

Return a new, un-owned Variable like self.

Parameters

**kwargs (dict) – Optional “name” keyword argument for the copied instance. Same as self.name if value not provided.

Returns

A new Variable instance with no owner or index.

Return type

Variable instance

Notes

Tags and names are copied to the returned instance.

equals(other)[source]#

This does what __eq__ would normally do, but Variable and Apply should always be hashable by id.

class pytensor.graph.basic.Constant(type: _TypeType, data: Any, name: Optional[str] = None)[source]#

A Variable with a fixed data field.

Constant nodes make numerous optimizations possible (e.g. constant in-lining in C code, constant folding, etc.)

Notes

The data field is filtered by what is provided in the constructor for the Constant’s type field.

clone(**kwargs)[source]#

Return a new, un-owned Variable like self.

Parameters

**kwargs (dict) – Optional “name” keyword argument for the copied instance. Same as self.name if value not provided.

Returns

A new Variable instance with no owner or index.

Return type

Variable instance

Notes

Tags and names are copied to the returned instance.

get_test_value()[source]#

Get the test value.

Raises

TestValueError

class pytensor.graph.basic.Node[source]#

A Node in a PyTensor graph.

Currently, graphs contain two kinds of Nodes: Variables and Applys. Edges in the graph are not explicitly represented. Instead each Node keeps track of its parents via Variable.owner / Apply.inputs.

get_parents()[source]#

Return a list of the parents of this node. Should return a copy; i.e., modifying the return value should not modify the graph structure.

class pytensor.graph.basic.NominalVariable(id: _IdType, typ: _TypeType, **kwargs)[source]#

A variable that enables alpha-equivalent comparisons.

clone(**kwargs)[source]#

Return a new, un-owned Variable like self.

Parameters

**kwargs (dict) – Optional “name” keyword argument for the copied instance. Same as self.name if value not provided.

Returns

A new Variable instance with no owner or index.

Return type

Variable instance

Notes

Tags and names are copied to the returned instance.

class pytensor.graph.basic.Variable(type: _TypeType, owner: OptionalApplyType, index: Optional[int] = None, name: Optional[str] = None)[source]#

A Variable is a node in an expression graph that represents a variable.

The inputs and outputs of every Apply are Variable instances. The input and output arguments to create a function are also Variable instances. A Variable is like a strongly-typed variable in some other languages; each Variable contains a reference to a Type instance that defines the kind of value the Variable can take in a computation.

A Variable is a container for four important attributes:

  • type a Type instance defining the kind of value this Variable can have,

  • owner either None (for graph roots) or the Apply instance of which self is an output,

  • index the integer such that owner.outputs[index] is this_variable (ignored if owner is None),

  • name a string to use in pretty-printing and debugging.

There are a few kinds of Variables to be aware of: A Variable which is the output of a symbolic computation has a reference to the Apply instance to which it belongs (property: owner) and the position of itself in the owner’s output list (property: index).

  • Variable (this base type) is typically the output of a symbolic computation.

  • Constant: a subclass which adds a default and un-replaceable value, and requires that owner is None.

  • TensorVariable: a subclass of Variable that represents a numpy.ndarray object.

  • TensorSharedVariable: a shared version of TensorVariable.

  • SparseVariable: a subclass of Variable that represents a scipy.sparse.{csc,csr}_matrix object.

  • RandomVariable.

A Variable which is the output of a symbolic computation will have an owner not equal to None.

Using a Variables’ owner field and an Apply node’s inputs fields, one can navigate a graph from an output all the way to the inputs. The opposite direction is possible with a FunctionGraph and its FunctionGraph.clients dict, which maps Variables to a list of their clients.

Parameters
  • type (a Type instance) – The type governs the kind of data that can be associated with this variable.

  • owner (None or Apply instance) – The Apply instance which computes the value for this variable.

  • index (None or int) – The position of this Variable in owner.outputs.

  • name (None or str) – A string for pretty-printing and debugging.

Examples

import pytensor
import pytensor.tensor as at

a = at.constant(1.5)            # declare a symbolic constant
b = at.fscalar()                # declare a symbolic floating-point scalar

c = a + b                       # create a simple expression

f = pytensor.function([b], [c])   # this works because a has a value associated with it already

assert 4.0 == f(2.5)[0]         # bind 2.5 to an internal copy of b and evaluate an internal c; f returns a list because its outputs were given as a list

pytensor.function([a], [c])       # compilation error because b (required by c) is undefined

pytensor.function([a,b], [c])     # compilation error because a is constant, it can't be an input

The python variables a, b, c all refer to instances of type Variable. The Variable referred to by a is also an instance of Constant.

clone(**kwargs)[source]#

Return a new, un-owned Variable like self.

Parameters

**kwargs (dict) – Optional “name” keyword argument for the copied instance. Same as self.name if value not provided.

Returns

A new Variable instance with no owner or index.

Return type

Variable instance

Notes

Tags and names are copied to the returned instance.

eval(inputs_to_values=None)[source]#

Evaluate the Variable.

Parameters

inputs_to_values – A dictionary mapping PyTensor Variables to values.

Examples

>>> import numpy as np
>>> import pytensor.tensor as at
>>> x = at.dscalar('x')
>>> y = at.dscalar('y')
>>> z = x + y
>>> np.allclose(z.eval({x : 16.3, y : 12.1}), 28.4)
True

We passed eval() a dictionary mapping symbolic PyTensor Variables to the values to substitute for them, and it returned the numerical value of the expression.

Notes

eval() will be slow the first time you call it on a variable – it needs to call function() to compile the expression behind the scenes. Subsequent calls to eval() on that same variable will be fast, because the variable caches the compiled function.

This way of computing has more overhead than a normal PyTensor function, so don’t use it too much in real scripts.

get_parents()[source]#

Return a list of the parents of this node. Should return a copy; i.e., modifying the return value should not modify the graph structure.

get_test_value()[source]#

Get the test value.

Raises

TestValueError

pytensor.graph.basic.ancestors(graphs: Iterable[Variable], blockers: Optional[Collection[Variable]] = None) Generator[Variable, None, None][source]#

Return the variables that contribute to those in given graphs (inclusive).

Parameters
  • graphs (list of Variable instances) – Output Variable instances from which to search backward through owners.

  • blockers (list of Variable instances) – A collection of Variables that, when found, prevent the graph search from proceeding past that point.

Yields

Variables – All ancestor variables, in the order found by a left-recursive depth-first search started at the nodes in graphs.

pytensor.graph.basic.apply_depends_on(apply: Apply, depends_on: Union[Apply, Collection[Apply]]) bool[source]#

Determine if any depends_on is in the graph given by apply.

Parameters
  • apply (Apply) – The Apply node to check.

  • depends_on (Union[Apply, Collection[Apply]]) – Apply nodes to check dependency on

Return type

bool

pytensor.graph.basic.applys_between(ins: Collection[Variable], outs: Iterable[Variable]) Generator[Apply, None, None][source]#

Extract the Applys contained within the sub-graph between given input and output variables.

Parameters
  • ins (Collection[Variable]) – Input Variables.

  • outs (Iterable[Variable]) – Output Variables.

Yields

Apply – The Applys that are contained within the sub-graph that lies between ins and outs, including the owners of the Variables in outs and intermediary Applys between ins and outs, but not the owners of the Variables in ins.

pytensor.graph.basic.as_string(inputs: ~typing.List[~pytensor.graph.basic.Variable], outputs: ~typing.List[~pytensor.graph.basic.Variable], leaf_formatter=<class 'str'>, node_formatter=<function default_node_formatter>) List[str][source]#

Returns a string representation of the subgraph between inputs and outputs.

Parameters
  • inputs (list) – Input Variables.

  • outputs (list) – Output Variables.

  • leaf_formatter (callable) – Takes a Variable and returns a string to describe it.

  • node_formatter (callable) – Takes an Op and the list of strings corresponding to its arguments and returns a string to describe it.

Returns

Returns a string representation of the subgraph between inputs and outputs. If the same node is used by several other nodes, the first occurrence will be marked as *n -> description and all subsequent occurrences will be marked as *n, where n is an id number (ids are attributed in an unspecified order and only exist for viewing convenience).

Return type

list of str

pytensor.graph.basic.clone(inputs: List[Variable], outputs: List[Variable], copy_inputs: bool = True, copy_orphans: Optional[bool] = None, clone_inner_graphs: bool = False) Tuple[Collection[Variable], Collection[Variable]][source]#

Copies the sub-graph contained between inputs and outputs.

Parameters
  • inputs – Input Variables.

  • outputs – Output Variables.

  • copy_inputs – If True, the inputs will be copied (defaults to True).

  • copy_orphans – When None, use the copy_inputs value. When True, new orphans nodes are created. When False, original orphans nodes are reused in the new graph.

  • clone_inner_graphs (bool) – If True, clone HasInnerGraph Ops and their inner-graphs.

Return type

The inputs and outputs of that copy.

Notes

A Constant that appears in the inputs list is not considered an orphan, so it is copied according to the copy_inputs parameter; otherwise, it is copied according to the copy_orphans parameter.

pytensor.graph.basic.clone_get_equiv(inputs: Sequence[Variable], outputs: Sequence[Variable], copy_inputs: bool = True, copy_orphans: bool = True, memo: Optional[Dict[Union[Apply, Variable, Op], Union[Apply, Variable, Op]]] = None, clone_inner_graphs: bool = False, **kwargs) Dict[Union[Apply, Variable, Op], Union[Apply, Variable, Op]][source]#

Clone the graph between inputs and outputs and return a map of the cloned objects.

This function works by recursively cloning inputs and rebuilding a directed graph from the inputs up.

If memo already contains entries for some of the objects in the graph, those objects are replaced with their values in memo and not unnecessarily cloned.

Parameters
  • inputs – Inputs of the graph to be cloned.

  • outputs – Outputs of the graph to be cloned.

  • copy_inputs – If True, create the cloned graph from cloned input nodes; if False, clone a graph that is rooted at the original input nodes. Constants are not cloned.

  • copy_orphans – When True, inputs with no owners are cloned. When False, original inputs are reused in the new graph. Cloning is not performed for Constants.

  • memo – Optionally start with a partly-filled dictionary for the return value. If a dictionary is passed, this function will work in-place on that dictionary and return it.

  • clone_inner_graphs – If True, clone HasInnerGraph Ops and their inner-graphs.

  • kwargs – Keywords passed to Apply.clone_with_new_inputs.

pytensor.graph.basic.clone_node_and_cache(node: Apply, clone_d: Dict[Union[Apply, Variable, Op], Union[Apply, Variable, Op]], clone_inner_graphs=False, **kwargs) Optional[Apply][source]#

Clone an Apply node and cache the results in clone_d.

This function handles Op clones that are generated by inner-graph cloning.

Returns

  • None if all of node’s outputs are already in clone_d; otherwise,

  • return the clone of node.

pytensor.graph.basic.equal_computations(xs: List[Union[ndarray, Variable]], ys: List[Union[ndarray, Variable]], in_xs: Optional[List[Variable]] = None, in_ys: Optional[List[Variable]] = None) bool[source]#

Checks if PyTensor graphs represent the same computations.

The two lists xs, ys should have the same number of entries. The function checks if for any corresponding pair (x, y) from zip(xs, ys) x and y represent the same computations on the same variables (unless equivalences are provided using in_xs, in_ys).

If in_xs and in_ys are provided, then when comparing a node x with a node y they are automatically considered as equal if there is some index i such that x == in_xs[i] and y == in_ys[i] (and they both have the same type). Note that x and y can be in the list xs and ys, but also represent subgraphs of a computational graph in xs or ys.

Parameters
  • xs (list of Variable or ndarray) – Outputs of the first graph.

  • ys (list of Variable or ndarray) – Outputs of the second graph; must have the same length as xs.

  • in_xs (list of Variable) – Inputs of the first graph; in_xs[i] is treated as equivalent to in_ys[i].

  • in_ys (list of Variable) – Inputs of the second graph; must have the same length as in_xs.

Return type

bool

pytensor.graph.basic.general_toposort(outputs: Iterable[T], deps: Callable[[T], Union[OrderedSet, List[T]]], compute_deps_cache: Optional[Callable[[T], Optional[Union[OrderedSet, List[T]]]]] = None, deps_cache: Optional[Dict[T, List[T]]] = None, clients: Optional[Dict[T, List[T]]] = None) List[T][source]#

Perform a topological sort of all nodes starting from the given outputs.

Parameters
  • outputs (iterable) – The output nodes from which the sort starts.

  • deps (callable) – A Python function that takes a node as input and returns its dependencies.

  • compute_deps_cache (optional) – If provided, deps_cache should also be provided. This is a function like deps, but that also caches its results in a dict passed as deps_cache.

  • deps_cache (dict) – A dict mapping nodes to their children. This is populated by compute_deps_cache.

  • clients (dict) – If a dict is passed, it will be filled with a mapping of nodes-to-clients for each node in the subgraph.

Notes

deps(i) should behave like a pure function (no funny business with internal state).

deps(i) will be cached by this function (to be fast).

The order of the return value list is determined by the order of nodes returned by the deps function.

The second option (providing compute_deps_cache and deps_cache instead of deps) removes a Python function call, and allows for more specialized code, so it can be faster.

pytensor.graph.basic.get_var_by_name(graphs: Iterable[Variable], target_var_id: str, ids: str = 'CHAR') Tuple[Variable, ...][source]#

Get variables in a graph using their names.

Parameters
  • graphs – The graph, or graphs, to search.

  • target_var_id – The name to match against either Variable.name or Variable.auto_name.

Return type

A tuple containing all the Variables that match target_var_id.

pytensor.graph.basic.graph_inputs(graphs: Iterable[Variable], blockers: Optional[Collection[Variable]] = None) Generator[Variable, None, None][source]#

Return the inputs required to compute the given Variables.

Parameters
  • graphs (list of Variable instances) – Output Variable instances from which to search backward through owners.

  • blockers (list of Variable instances) – A collection of Variables that, when found, prevent the graph search from proceeding past that point.

Yields
Input nodes with no owner, in the order found by a left-recursive depth-first search started at the nodes in graphs.

pytensor.graph.basic.io_connection_pattern(inputs, outputs)[source]#

Return the connection pattern of a subgraph defined by given inputs and outputs.

pytensor.graph.basic.io_toposort(inputs: Iterable[Variable], outputs: Reversible[Variable], orderings: Optional[Dict[Apply, List[Apply]]] = None, clients: Optional[Dict[Variable, List[Variable]]] = None) List[Apply][source]#

Perform topological sort from input and output nodes.

Parameters
  • inputs (list or tuple of Variable instances) – Graph inputs.

  • outputs (list or tuple of Variable instances) – Graph outputs.

  • orderings (dict) – Keys are Apply instances, values are lists of Apply instances.

  • clients (dict) – If provided, it will be filled with mappings of nodes-to-clients for each node in the subgraph that is sorted.

pytensor.graph.basic.list_of_nodes(inputs: Collection[Variable], outputs: Iterable[Variable]) List[Apply][source]#

Return the Apply nodes of the graph between inputs and outputs.

Parameters
  • inputs (Collection[Variable]) – Input Variables.

  • outputs (Iterable[Variable]) – Output Variables.

pytensor.graph.basic.op_as_string(i, op, leaf_formatter=<class 'str'>, node_formatter=<function default_node_formatter>)[source]#

Return a string representation of the subgraph between i and op.inputs.

pytensor.graph.basic.orphans_between(ins: Collection[Variable], outs: Iterable[Variable]) Generator[Variable, None, None][source]#

Extract the Variables not within the sub-graph between input and output nodes.

Parameters
  • ins (Collection[Variable]) – Input Variables.

  • outs (Iterable[Variable]) – Output Variables.

Yields

Variable – The Variables upon which one or more Variables in outs depend, but that are neither in ins nor in the sub-graph that lies between them.

Examples

>>> import pytensor.tensor as at
>>> x = at.scalar("x")
>>> y = at.scalar("y")
>>> list(orphans_between([x], [x + y]))
[y]

pytensor.graph.basic.replace_nominals_with_dummies(inputs, outputs)[source]#

Replace nominal inputs with dummy variables.

When constructing a new graph with nominal inputs from an existing graph, pre-existing nominal inputs need to be replaced with dummy variables beforehand; otherwise, sequential ID ordering (i.e. when nominals are IDed based on the ordered inputs to which they correspond) of the nominals could be broken, and/or circular replacements could manifest.

This function assumes that all the nominal variables in the subgraphs between inputs and outputs are present in inputs.

pytensor.graph.basic.truncated_graph_inputs(outputs: Sequence[Variable], ancestors_to_include: Optional[Collection[Variable]] = None) List[Variable][source]#

Get the truncated graph inputs.

Unlike graph_inputs(), this function returns the variables closest to outputs that do not depend on ancestors_to_include. Given the returned variables, no node is missing to compute outputs, and the returned variables are independent of each other.

Parameters
  • outputs (Collection[Variable]) – Variables to get the truncated inputs for.

  • ancestors_to_include (Optional[Collection[Variable]]) – Additional ancestors to assume, by default None

Returns

Variables required to compute outputs

Return type

List[Variable]

Examples

The returned nodes are marked in (parentheses); ancestor nodes are c, output nodes are o.

  • No ancestors to include

n - n - (o)
  • One ancestor to include

n - (c) - o
  • Two ancestors to include where one depends on the other; both are returned

(c) - (c) - o
  • Additional nodes are present

   (c) - n - o
n - (n) -'
  • Disconnected ancestors to include are not returned

(c) - n - o
 c
  • Disconnected output is present and returned

(c) - (c) - o
(o)
  • An ancestor to include that is itself an output adds itself

n - (c) - (o/c)
pytensor.graph.basic.variable_depends_on(variable: Variable, depends_on: Union[Variable, Collection[Variable]]) bool[source]#

Determine if any depends_on is in the graph given by variable.

Parameters
  • variable (Variable) – Node to check.

  • depends_on (Collection[Variable]) – Nodes to check dependency on.

Return type

bool

pytensor.graph.basic.vars_between(ins: Collection[Variable], outs: Iterable[Variable]) Generator[Variable, None, None][source]#

Extract the Variables within the sub-graph between input and output nodes.

Parameters
  • ins (Collection[Variable]) – Input Variables.

  • outs (Iterable[Variable]) – Output Variables.

Yields

Variable – The Variables that are involved in the sub-graph that lies between ins and outs. This includes ins, outs, orphans_between(ins, outs), and all values of all intermediary steps from ins to outs.

pytensor.graph.basic.view_roots(node: Variable) List[Variable][source]#

Return the leaves from a search through consecutive view-maps.

pytensor.graph.basic.walk(nodes: ~typing.Iterable[~pytensor.graph.basic.T], expand: ~typing.Callable[[~pytensor.graph.basic.T], ~typing.Optional[~typing.Iterable[~pytensor.graph.basic.T]]], bfs: bool = True, return_children: bool = False, hash_fn: ~typing.Callable[[~pytensor.graph.basic.T], int] = <built-in function id>) Generator[Union[T, Tuple[T, Optional[Iterable[T]]]], None, None][source]#

Walk through a graph, either breadth- or depth-first.

Parameters
  • nodes – The nodes from which to start walking.

  • expand – A callable that is applied to each node in nodes, the results of which are either new nodes to visit or None.

  • bfs – If True, breadth-first search is used; otherwise, depth-first search.

  • return_children – If True, each output node will be accompanied by the output of expand (i.e. the corresponding child nodes).

  • hash_fn – The function used to produce hashes of the elements in nodes. The default is id.

Notes

A node will appear at most once in the return value, even if it appears multiple times in the nodes parameter.