Apply#

Instances of Apply represent the application of an Op to some input Variable (or variables) to produce some output Variable (or variables). They are like the application of a [symbolic] mathematical function to some [symbolic] inputs.


Broadcasting#

Broadcasting is a mechanism which allows tensors with different numbers of dimensions to be used in element-by-element (i.e. element-wise) computations. It works by (virtually) replicating the smaller tensor along the dimensions that it is lacking.

For more detail, see Broadcasting, and also:

* SciPy documentation about NumPy's broadcasting
* OnLamp article about NumPy's broadcasting
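A minimal NumPy sketch of the mechanism (PyTensor follows NumPy's broadcasting semantics):

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
v = np.array([10, 20, 30])  # shape (3,)

# v is (virtually) replicated along the missing leading dimension,
# so it is added to every row of M
result = M + v
print(result)  # [[11 22 33]
               #  [14 25 36]]
```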


Constant#

A variable with an immutable value. For example, when you type

>>> x = pt.ivector()
>>> y = x + 3

Then a constant is created to represent the 3 in the graph.

See also: graph.basic.Constant


Elemwise#

An element-wise operation f on two tensor variables M and N is one such that:

f(M, N)[i, j] == f(M[i, j], N[i, j])

In other words, each element of an input matrix is combined with the corresponding element of the other(s). There are no dependencies between elements whose [i, j] coordinates do not correspond, so an element-wise operation is like a scalar operation generalized along several dimensions. Element-wise operations are defined for tensors of different numbers of dimensions by broadcasting the smaller ones. The Op responsible for performing element-wise computations is Elemwise.
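The defining property can be checked numerically with NumPy (addition used here as the example element-wise f):

```python
import numpy as np

def f(a, b):
    # an example element-wise operation
    return a + b

M = np.array([[1.0, 2.0], [3.0, 4.0]])
N = np.array([[10.0, 20.0], [30.0, 40.0]])

out = f(M, N)
# f(M, N)[i, j] == f(M[i, j], N[i, j]) at every coordinate
for i in range(2):
    for j in range(2):
        assert out[i, j] == f(M[i, j], N[i, j])
```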


Expression#

See Apply

Expression Graph#

A directed, acyclic set of connected Variable and Apply nodes that expresses a symbolic functional relationship between variables. You use PyTensor by defining expression graphs and then compiling them with pytensor.function.

See also Variable, Op, Apply, and Type, or read more about Graph Structures.


Destructive#

An Op is destructive (of particular inputs) if its computation requires that one or more of its inputs be overwritten or otherwise invalidated. For example, inplace Ops are destructive. Destructive Ops can sometimes be faster than non-destructive alternatives. PyTensor encourages users not to put destructive Ops into graphs that are given to pytensor.function, but instead to trust the rewrites to insert destructive Ops judiciously.

Destructive Ops are indicated via an Op.destroy_map attribute. (See Op.)


Graph#

See expression graph.


Inplace#

Inplace computations are computations that destroy their inputs as a side-effect. For example, if you iterate over a matrix and double every element, this is an inplace operation because when you are done, the original input has been overwritten. Ops representing inplace computations are destructive, and by default these can only be inserted by rewrites, not user code.
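The doubling example looks like this in NumPy terms (an in-place analogue, not PyTensor code):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])

# double every element in place: the result overwrites x's own
# storage, destroying the original input values
np.multiply(x, 2.0, out=x)
print(x)  # [2. 4. 6.]
```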


Linker#

A Linker instance responsible for “running” the compiled function. Among other things, the linker determines whether computations are carried out with C or Python code.


Mode#

A Mode instance specifying an optimizer and a linker that is passed to pytensor.function. It parametrizes how an expression graph is converted to a callable object.


Op#

The .op of an Apply node, together with its symbolic inputs, fully determines what kind of computation will be carried out for that Apply at run-time. Mathematical functions such as addition (i.e. pytensor.tensor.add()) and indexing x[i] are Ops in PyTensor. Much of the library documentation is devoted to describing the various Ops that are provided with PyTensor, but you can add more.

See also Variable, Type, and Apply, or read more about Graph Structures.


Rewriter#

A function or class that transforms a PyTensor graph.


Rewrite#

An instance of a rewriter that can improve the performance of a graph.


Pure#

An Op is pure if it has no destructive side-effects.


Storage#

The memory that is used to store the value of a Variable. In most cases, storage is internal to a compiled function, but in some cases (such as constants and shared variables) the storage is not internal.

Shared Variable#

A Variable whose value may be shared between multiple functions. See shared and pytensor.function.


pytensor.function#

The interface for PyTensor’s compilation from symbolic expression graphs to callable objects. See function.function().


Type#

The .type of a Variable indicates what kinds of values might be computed for it in a compiled graph. It is an instance of a class that inherits from Type and is used as the .type attribute of a Variable.

See also Variable, Op, and Apply, or read more about Graph Structures.


Variable#

The main data structure you work with when using PyTensor. For example,

>>> x = pt.ivector()
>>> y = -x**2

x and y are both Variables, i.e. instances of the Variable class.

See also Type, Op, and Apply, or read more about Graph Structures.


View#

Some tensor Ops (such as Subtensor and DimShuffle) can be computed in constant time by simply re-indexing their inputs. The outputs of such Ops are views because their storage might be aliased to the storage of other variables (the inputs of the Apply). It is important for PyTensor to know which Variables are views of which other ones in order to introduce Destructive Ops correctly.

Ops whose outputs are views have their Op.view_map attribute set.
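In NumPy terms (an analogue of the aliasing, not PyTensor code), a transpose is such a constant-time re-indexing:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
v = x.T  # computed in constant time by re-indexing; shares x's storage

v[0, 0] = 100
# writing through the view changed the original: the storages alias
assert x[0, 0] == 100
```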