printing – Graph Printing and Symbolic Print Statement#

Guide#

Printing during execution#

Intermediate values in a computation cannot be printed with Python's normal print function, because PyTensor graphs contain no statements. Instead, there is the Print Op.

>>> from pytensor import tensor as pt, function, printing
>>> x = pt.dvector()
>>> hello_world_op = printing.Print('hello world')
>>> printed_x = hello_world_op(x)
>>> f = function([x], printed_x)
>>> r = f([1, 2, 3])
hello world __str__ = [ 1.  2.  3.]

If a function like f prints more than one thing, the outputs will not necessarily appear in the order you expect; the order may even depend on which graph rewrites are applied. Strictly speaking, the interface leaves the print order mostly undefined. The only hard rule is that if the output of some Print a is ultimately used as an input to another Print b (so that b depends on a), then a will print before b.

Printing graphs#

PyTensor provides two functions (pytensor.pp() and pytensor.printing.debugprint()) to print a graph to the terminal before or after compilation. These two functions print expression graphs in different ways: pp() is more compact and math-like, debugprint() is more verbose. PyTensor also provides pytensor.printing.pydotprint() that creates a png image of the function.

  1. The first is pytensor.pp().

>>> from pytensor import pp, grad, function
>>> from pytensor import tensor as pt
>>> x = pt.dscalar('x')
>>> y = x ** 2
>>> gy = grad(y, x)
>>> pp(gy)  # print out the gradient prior to rewriting
'((fill((x ** TensorConstant{2}), TensorConstant{1.0}) * TensorConstant{2}) * (x ** (TensorConstant{2} - TensorConstant{1})))'
>>> f = function([x], gy)
>>> pp(f.maker.fgraph.outputs[0])
'(TensorConstant{2.0} * x)'

The parameter in pt.dscalar('x') in the first line is the name of this variable in the graph. This name is used when printing the graph to make it more readable. If no name is provided, the variable is printed using its type, as returned by x.type() – in this example, <TensorType(float64, ())>.

The name parameter can be any string. There are no naming restrictions: in particular, you can have many variables with the same name. As a convention, we generally give variables a string name that is similar to the name of the variable in local scope, but you might want to break this convention to include an object instance, or an iteration number or other kinds of information in the name.

Note

To make graphs legible, pp() hides some Ops that are actually in the graph. For example, automatic DimShuffles are not shown.

  2. The second function to print a graph is pytensor.printing.debugprint()

>>> pytensor.printing.debugprint(f.maker.fgraph.outputs[0])  
Elemwise{mul,no_inplace} [id A] ''
 |TensorConstant{2.0} [id B]
 |x [id C]

Each line printed represents a Variable in the graph. The line |x [id C] means the variable named x with debugprint identifier [id C] is an input of the Elemwise. If you accidentally have two variables called x in your graph, their different debugprint identifiers will be your clue.

The line |TensorConstant{2.0} [id B] means that there is a constant 2.0 with this debugprint identifier.

The line Elemwise{mul,no_inplace} [id A] '' is indented less than the others because it represents the variable computed by multiplying the more indented inputs together.

The | symbols are just there to help read big graphs. They group together the inputs to a node.

Sometimes, you’ll see a Variable but not the inputs underneath. That can happen when that Variable has already been printed. Where else has it been printed? Look for its debugprint identifier using the Find feature of your text editor.

>>> pytensor.printing.debugprint(gy)  
Elemwise{mul} [id A] ''
 |Elemwise{mul} [id B] ''
 | |Elemwise{second,no_inplace} [id C] ''
 | | |Elemwise{pow,no_inplace} [id D] ''
 | | | |x [id E]
 | | | |TensorConstant{2} [id F]
 | | |TensorConstant{1.0} [id G]
 | |TensorConstant{2} [id F]
 |Elemwise{pow} [id H] ''
   |x [id E]
   |Elemwise{sub} [id I] ''
     |TensorConstant{2} [id F]
     |InplaceDimShuffle{} [id J] ''
       |TensorConstant{1} [id K]
>>> pytensor.printing.debugprint(gy, depth=2)  
Elemwise{mul} [id A] ''
 |Elemwise{mul} [id B] ''
 |Elemwise{pow} [id C] ''

If the depth parameter is provided, it limits the number of levels that are shown.

  3. The function pytensor.printing.pydotprint() renders a compiled PyTensor function to an image file (png by default).

In the image, Apply nodes (the applications of Ops) are shown as ellipses and variables are shown as boxes. The number at the end of each label indicates graph position. Boxes and ellipses have their own sets of positions, so you can have Apply node #1 and also variable #1. The numbers in the Apply ellipses give their position in the run-time execution order of the graph. Green boxes are inputs to the graph and blue boxes are outputs.

If your graph uses shared variables, those shared variables will appear as inputs. Future versions of pydotprint() may distinguish these implicit inputs from explicit inputs.

If you give an updates argument when creating your function, the updated variables are added as extra inputs and outputs of the graph. Future versions of pydotprint() may distinguish these implicit inputs and outputs from explicit ones.

Reference#

class pytensor.printing.Print(Op)[source]#

This identity-like Op has the side effect of printing a message followed by its inputs when it runs. The default behaviour is to print the __str__ representation. Optionally, one can pass a list of member functions to execute on the input, or attributes to print.

__init__(message="", attrs=("__str__",))[source]#
Parameters:
  • message (string) – prepend this to the output

  • attrs (list of strings) – list of input node attributes or member functions to print. Functions are identified through callable(), executed and their return value printed.

__call__(x)[source]#
Parameters:

x (a Variable) – any symbolic variable

Returns:

symbolic identity(x)

When you use the return value from this function in a PyTensor function, running the function will print the value that x takes in the graph.

pytensor.printing.debugprint(graph_like: pytensor.graph.basic.Variable | pytensor.graph.basic.Apply | pytensor.compile.function.types.Function | pytensor.graph.fg.FunctionGraph | collections.abc.Sequence[pytensor.graph.basic.Variable | pytensor.graph.basic.Apply | pytensor.compile.function.types.Function | pytensor.graph.fg.FunctionGraph], depth: int = -1, print_type: bool = False, file: Optional[Union[Literal['str'], TextIO]] = None, id_type: Literal['id', 'int', 'CHAR', 'auto', ''] = 'CHAR', stop_on_name: bool = False, done: Optional[dict[Union[Literal['output'], pytensor.graph.basic.Variable, pytensor.graph.basic.Apply], str]] = None, print_storage: bool = False, used_ids: Optional[dict[Union[Literal['output'], pytensor.graph.basic.Variable, pytensor.graph.basic.Apply], str]] = None, print_op_info: bool = False, print_destroy_map: bool = False, print_view_map: bool = False, print_fgraph_inputs: bool = False) str | TextIO[source]#

Print a graph as text.

Each line printed represents a Variable in a graph. The indentation of lines corresponds to its depth in the symbolic graph. The first part of the text identifies whether it is an input or the output of some Apply node. The second part of the text is an identifier of the Variable.

If a Variable is encountered multiple times in the depth-first search, it is only printed recursively the first time. Later, just the Variable identifier is printed.

If an Apply node has multiple outputs, then a .N suffix will be appended to the Apply node’s identifier, indicating to which output a line corresponds.

Parameters:
  • graph_like – The object(s) to be printed.

  • depth – Print graph to this depth (-1 for unlimited).

  • print_type – If True, print the Types of each Variable in the graph.

  • file – When file is a TextIO object, print to it; when file is "str", return the output as a string; when file is None, print to sys.stdout.

  • id_type

    Determines the type of identifier used for Variables:
    • "id": print the python id value,

    • "int": print an integer identifier,

    • "CHAR": print a capital letter,

    • "auto": print the Variable.auto_name values,

    • "": don’t print an identifier.

  • stop_on_name – When True, if a node in the graph has a name, we don’t print anything below it.

  • done – A dict where we store the ids of printed nodes. Useful to have multiple calls to debugprint share the same ids.

  • print_storage – If True, this will print the storage map for PyTensor functions. When combined with allow_gc=False, after the execution of a PyTensor function, the output will show the intermediate results.

  • used_ids – A map between nodes and their printed ids.

  • print_op_info – Print extra information provided by the relevant Ops. For example, print the tap information for Scan inputs and outputs.

  • print_destroy_map – Whether to print the destroy_maps of printed objects

  • print_view_map – Whether to print the view_maps of printed objects

  • print_fgraph_inputs – Print the inputs of FunctionGraphs.

Return type:

A string representing the printed graph, if file is "str"; otherwise, file.

pytensor.pp(*args)[source]#

Just a shortcut to pytensor.printing.pp()

pytensor.printing.pp(*args)[source]#

Print to the terminal a math-like expression.

pytensor.printing.pydotprint(fct, outfile=None, compact=True, format='png', with_ids=False, high_contrast=True, cond_highlight=None, colorCodes=None, max_label_size=70, scan_graphs=False, var_with_name_simple=False, print_output_file=True, return_image=False)[source]#

Print to a file the graph of a compiled PyTensor function's Ops. Supports all pydot output formats, including png and svg.

Parameters:
  • fct – a compiled PyTensor function, a Variable, an Apply or a list of Variable.

  • outfile – the output file where to put the graph.

  • compact – if True, intermediate variables without a name are removed.

  • format – the file format of the output.

  • with_ids – print the toposort index in each Apply node's label, and an index number in each variable box.

  • high_contrast – if True, each node is filled with its corresponding color instead of only coloring the border.

  • colorCodes – a dictionary mapping Op names to colors.

  • cond_highlight – highlights a lazy if (IfElse) by drawing a border around each of the three possible categories of Ops: those on the left branch, those on the right branch, and those on both branches. Alternatively, you can provide the node that represents the lazy if.

  • scan_graphs – if True, the inner graph of each Scan op is plotted in its own file, named by concatenating the main file name with the Scan op's name and its index in the toposort. This index can be printed with the option with_ids.

  • var_with_name_simple – if True and a variable has a name, only the name is printed. Otherwise, the type is concatenated to the variable name.

  • return_image

    If True, the image is created and returned. Useful to display the image in an IPython notebook.

    import pytensor
    v = pytensor.tensor.vector()
    from IPython.display import SVG
    SVG(pytensor.printing.pydotprint(v * 2, return_image=True,
                                     format='svg'))

In the graph, ellipses are Apply nodes (the application of an Op) and boxes are variables. If a variable has a name, that name is used as its label (if multiple variables share the same name, they are merged in the graph). Otherwise, if the variable is a constant, its value is printed; failing that, its type plus a unique number is printed to prevent distinct variables from being merged. Each Apply ellipse shows its Op together with a number giving its position in the toposort order of application. If an Apply node has more than one input, each edge between an input and the Apply node is labeled with the input's index.

Variable color code::
  • Cyan boxes are SharedVariables (inputs and/or outputs of the graph),

  • Green boxes are input variables of the graph,

  • Blue boxes are output variables of the graph,

  • Grey boxes are variables that are not outputs and are not used.

Default Apply node color code::
  • Red ellipses are transfers from/to the GPU,

  • Yellow ellipses are Scan nodes,

  • Brown ellipses are Shape nodes,

  • Magenta ellipses are IfElse nodes,

  • Dark pink ellipses are Elemwise nodes,

  • Purple ellipses are Subtensor nodes,

  • Orange ellipses are Alloc nodes.

Edges are black by default. If a node returns a view of an input, the corresponding input edge is blue. If it returns a destroyed input, the corresponding edge is red.

Note

Since October 20th, 2014, this function prints the inner graph of each Scan op separately, after the top-level output.