fg – Graph Container#

FunctionGraph#

class pytensor.graph.fg.FunctionGraph(inputs: Optional[Sequence[Variable]] = None, outputs: Optional[Sequence[Variable]] = None, features: Optional[Sequence[Feature]] = None, clone: bool = True, update_mapping: Optional[dict[pytensor.graph.basic.Variable, pytensor.graph.basic.Variable]] = None, **clone_kwds)[source]#

A FunctionGraph represents a subgraph bound by a set of input variables and a set of output variables, i.e., a subgraph that specifies a PyTensor function. The inputs list should contain all the inputs on which the outputs depend. Variables of type Constant are not counted as inputs.

The FunctionGraph supports the replace operation, which substitutes one variable in the subgraph for another, e.g. replacing (x + x).out with (2 * x).out. This is the basis for optimization in PyTensor.

This class is also responsible for verifying that a graph is valid (i.e., that all the dtypes and broadcast patterns are compatible with the way the Variables are used) and for tracking the Variables with a FunctionGraph.clients dict that specifies which Apply nodes use each Variable. The FunctionGraph.clients field, combined with Variable.owner and each Apply.inputs, allows the graph to be traversed in both directions.

It can also be extended with new features using FunctionGraph.attach_feature(). See Feature for event types and documentation. Extra features allow the FunctionGraph to verify new properties of a graph as it is optimized.

The constructor creates a FunctionGraph which operates on the subgraph bound by the inputs and outputs sets.

This class keeps lists for the inputs and outputs and modifies them in-place.


Note

FunctionGraph(inputs, outputs) clones the inputs by default. To avoid this behavior, pass clone=False. Cloning is the default because we do not want cached constants to end up in the fgraph.

add_client(var: Variable, new_client: tuple[Union[pytensor.graph.basic.Apply, Literal['output']], int]) None[source]#

Update the clients of var with new_client.

Parameters:
  • var (Variable) – The Variable to be updated.

  • new_client ((Apply, int)) – A (node, i) pair such that node.inputs[i] is var.

add_input(var: Variable, check: bool = True) None[source]#

Add a new variable as an input to this FunctionGraph.

Parameters:

var (pytensor.graph.basic.Variable) –

add_output(var: Variable, reason: Optional[str] = None, import_missing: bool = False)[source]#

Add a new variable as an output to this FunctionGraph.

attach_feature(feature: Feature) None[source]#

Add a graph.features.Feature to this function graph and trigger its on_attach callback.

change_node_input(node: Union[Apply, Literal['output']], i: int, new_var: Variable, reason: Optional[str] = None, import_missing: bool = False, check: bool = True) None[source]#

Change node.inputs[i] to new_var.

new_var.type.is_super(old_var.type) must be True, where old_var is the current value of node.inputs[i] which we want to replace.

For each feature that has an on_change_input method, this method calls: feature.on_change_input(function_graph, node, i, old_var, new_var, reason)

Parameters:
  • node – The node for which an input is to be changed. If the value is the string "output" then the self.outputs will be used instead of node.inputs.

  • i – The index in node.inputs that we want to change.

  • new_var – The new variable to take the place of node.inputs[i].

  • import_missing – Add missing inputs instead of raising an exception.

  • check – When True, perform a type check between the variable being replaced and its replacement. This is primarily used by the History Feature, which needs to revert types that have been narrowed and would otherwise fail this check.

check_integrity() None[source]#

Check the integrity of nodes in the graph.

clone(check_integrity=True) FunctionGraph[source]#

Clone the graph.

clone_get_equiv(check_integrity: bool = True, attach_feature: bool = True, **kwargs) tuple['FunctionGraph', dict[typing.Union[pytensor.graph.basic.Apply, pytensor.graph.basic.Variable, ForwardRef('Op')], typing.Union[pytensor.graph.basic.Apply, pytensor.graph.basic.Variable, ForwardRef('Op')]]][source]#

Clone the graph and return a dict that maps old nodes to new nodes.

Parameters:
  • check_integrity – Whether or not to check the resulting graph’s integrity.

  • attach_feature – Whether or not to attach self’s features to the cloned graph.

Returns:

  • e – The cloned FunctionGraph. Every node in the cloned graph is cloned.

  • equiv – A dict that maps old nodes to the new nodes.

collect_callbacks(name: str, *args) dict[pytensor.graph.features.Feature, Any][source]#

Collect callbacks.

Returns a dictionary d such that d[feature] == getattr(feature, name)(*args) for each feature that has a method with the given name.

execute_callbacks(name: str, *args, **kwargs) None[source]#

Execute callbacks.

Calls getattr(feature, name)(*args) for each feature that has a method with the given name.

get_clients(var: Variable) list[tuple[Union[pytensor.graph.basic.Apply, Literal['output']], int]][source]#

Return a list of all the (node, i) pairs such that node.inputs[i] is var.

import_node(apply_node: Apply, check: bool = True, reason: Optional[str] = None, import_missing: bool = False) None[source]#

Recursively import everything between an Apply node and the FunctionGraph’s outputs.

Parameters:
  • apply_node (Apply) – The node to be imported.

  • check (bool) – Check that the inputs for the imported nodes are also present in the FunctionGraph.

  • reason (str) – The name of the optimization or operation in progress.

  • import_missing (bool) – Add missing inputs instead of raising an exception.

import_var(var: Variable, reason: Optional[str] = None, import_missing: bool = False) None[source]#

Import a Variable into this FunctionGraph.

This will import the var’s Apply node and inputs.

Parameters:
  • variable (pytensor.graph.basic.Variable) – The variable to be imported.

  • reason (str) – The name of the optimization or operation in progress.

  • import_missing (bool) – Add missing inputs instead of raising an exception.

orderings() dict[pytensor.graph.basic.Apply, list[pytensor.graph.basic.Apply]][source]#

Return a map of node to node evaluation dependencies.

Each key node is mapped to a list of nodes that must be evaluated before the key nodes can be evaluated.

This is used primarily by the DestroyHandler Feature to ensure that the clients of any destroyed inputs have already computed their outputs.

Notes

This only calls the Feature.orderings() method of each Feature attached to the FunctionGraph. It does not take care of computing the dependencies by itself.

remove_client(var: Variable, client_to_remove: tuple[Union[pytensor.graph.basic.Apply, Literal['output']], int], reason: Optional[str] = None, remove_if_empty: bool = False) None[source]#

Recursively remove clients of a variable.

This is the main method to remove variables or Apply nodes from a FunctionGraph.

This will remove var from the FunctionGraph if it doesn’t have any clients remaining. If it has an owner and all the outputs of the owner have no clients, it will also be removed.

Parameters:
  • var – The variable whose client is to be removed.

  • client_to_remove – A (node, i) pair such that node.inputs[i] will no longer be var in this FunctionGraph.

  • remove_if_empty – When True, remove var’s entry from self.clients once its client list becomes empty.

remove_feature(feature: Feature) None[source]#

Remove a feature from the graph.

Calls feature.on_detach(function_graph) if an on_detach method is defined.

remove_input(input_idx: int, reason: Optional[str] = None)[source]#

Remove the input at index input_idx.

remove_node(node: Apply, reason: Optional[str] = None)[source]#

Remove an Apply node from the FunctionGraph.

This will remove everything that depends on the outputs of node, as well as any “orphaned” variables and nodes created by node’s removal.

remove_output(output_idx: int, reason: Optional[str] = None)[source]#

Remove the output at index output_idx.

replace(var: Variable, new_var: Variable, reason: Optional[str] = None, verbose: Optional[bool] = None, import_missing: bool = False) None[source]#

Replace a variable in the FunctionGraph.

This is the main interface for manipulating the subgraph in a FunctionGraph. For every node that uses var as an input, this makes it use new_var instead.

Parameters:
  • var – The variable to be replaced.

  • new_var – The variable to replace var.

  • reason – The name of the optimization or operation in progress.

  • verbose – Print reason, var, and new_var.

  • import_missing – Import missing variables.

replace_all(pairs: Iterable[tuple[pytensor.graph.basic.Variable, pytensor.graph.basic.Variable]], **kwargs) None[source]#

Replace variables in the FunctionGraph according to an iterable of (var, new_var) pairs.

setup_var(var: Variable) None[source]#

Set up a variable so it belongs to this FunctionGraph.

Parameters:

var (pytensor.graph.basic.Variable) –

toposort() list[pytensor.graph.basic.Apply][source]#

Return a toposorted list of the nodes.

Return an ordering of the graph’s Apply nodes such that:

  • all the nodes of the inputs of a node are before that node, and

  • they satisfy the additional orderings provided by FunctionGraph.orderings().

FunctionGraph Features#

class pytensor.graph.features.Feature[source]#

Base class for FunctionGraph extensions.

A Feature is an object with several callbacks that are triggered by various operations on FunctionGraphs. It can be used to enforce graph properties at all stages of graph optimization.

See also

pytensor.graph.features

for common extensions.

clone()[source]#

Create a clone that can be attached to a new FunctionGraph.

This default implementation returns self, which carries the assumption that the Feature is essentially stateless. If a subclass has state of its own that is in any way relative to a given FunctionGraph, this method should be overridden with an implementation that actually creates a fresh copy.

on_attach(fgraph)[source]#

Called by FunctionGraph.attach_feature, the method that attaches the feature to the FunctionGraph. Since this is called after the FunctionGraph is initially populated, this is where you should run checks on the initial contents of the FunctionGraph.

The on_attach method may raise the AlreadyThere exception to cancel the attach operation if it detects that another Feature instance implementing the same functionality is already attached to the FunctionGraph.

The feature has great freedom in what it can do with the fgraph: it may, for example, add methods to it dynamically.

on_change_input(fgraph, node, i, var, new_var, reason=None)[source]#

Called whenever node.inputs[i] is changed from var to new_var. By the time the callback runs, the change has already taken place.

If you raise an exception in this function, the state of the graph might be broken for all intents and purposes.

on_detach(fgraph)[source]#

Called by FunctionGraph.remove_feature. Should remove any dynamically-added functionality that it installed into the fgraph.

on_import(fgraph, node, reason)[source]#

Called whenever a node is imported into fgraph, which is just before the node is actually connected to the graph.

Note: this is not called when the graph is created. If you want to detect the first nodes to be imported into the graph, you should do so by implementing on_attach.

on_prune(fgraph, node, reason)[source]#

Called whenever a node is pruned (removed) from the fgraph, after it is disconnected from the graph.

orderings(fgraph)[source]#

Called by FunctionGraph.toposort. It should return a dictionary of {node: predecessors} where predecessors is a list of nodes that should be computed before the key node.

If you raise an exception in this function, the state of the graph might be broken for all intents and purposes.

FunctionGraph Feature List#

  • ReplaceValidate

  • DestroyHandler