Basic Tensor Functionality#
PyTensor supports symbolic tensor expressions. When you type,
>>> import pytensor.tensor as at
>>> x = at.fmatrix()
the x is a TensorVariable instance.
The at.fmatrix object itself is an instance of TensorType.
PyTensor knows what type of variable x is because x.type points back to at.fmatrix.
This section explains the various ways in which a tensor variable can be created, the attributes and methods of TensorVariable and TensorType, and the basic symbolic math and arithmetic that PyTensor supports for tensor variables.
In general, PyTensor’s API tries to mirror NumPy’s, so, in most cases, it’s safe to assume that the basic NumPy array functions and methods will be available.
Creation#
PyTensor provides a list of predefined tensor types that can be used to create tensor variables. Variables can be named to facilitate debugging, and all of these constructors accept an optional name argument. For example, the following each produce a TensorVariable instance that stands for a 0-dimensional ndarray of integers with the name 'myvar':
>>> x = at.scalar('myvar', dtype='int32')
>>> x = at.iscalar('myvar')
>>> x = at.tensor(dtype='int32', shape=(), name='myvar')
>>> from pytensor.tensor.type import TensorType
>>> x = TensorType(dtype='int32', shape=())('myvar')
Constructors with optional dtype#
These are the simplest and often-preferred methods for creating symbolic
variables in your code. By default, they produce floating-point variables (with dtype determined by pytensor.config.floatX), so if you use these constructors it is easy to switch your code between different levels of floating-point precision.
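For illustration, a minimal check that the default dtype of these constructors follows the configuration:
>>> import pytensor
>>> import pytensor.tensor as at
>>> x = at.vector()
>>> x.dtype == pytensor.config.floatX
True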
- pytensor.tensor.scalar(name=None, dtype=config.floatX)[source]#
Return a Variable for a 0-dimensional ndarray.
- pytensor.tensor.vector(name=None, dtype=config.floatX)[source]#
Return a Variable for a 1-dimensional ndarray.
- pytensor.tensor.row(name=None, dtype=config.floatX)[source]#
Return a Variable for a 2-dimensional ndarray in which the number of rows is guaranteed to be 1.
- pytensor.tensor.col(name=None, dtype=config.floatX)[source]#
Return a Variable for a 2-dimensional ndarray in which the number of columns is guaranteed to be 1.
- pytensor.tensor.matrix(name=None, dtype=config.floatX)[source]#
Return a Variable for a 2-dimensional ndarray.
- pytensor.tensor.tensor3(name=None, dtype=config.floatX)[source]#
Return a Variable for a 3-dimensional ndarray.
- pytensor.tensor.tensor4(name=None, dtype=config.floatX)[source]#
Return a Variable for a 4-dimensional ndarray.
- pytensor.tensor.tensor5(name=None, dtype=config.floatX)[source]#
Return a Variable for a 5-dimensional ndarray.
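For illustration, the guarantees above are visible in the broadcastable pattern of the resulting variables:
>>> at.row().broadcastable
(True, False)
>>> at.col().broadcastable
(False, True)
>>> at.tensor3().ndim
3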
All Fully-Typed Constructors#
The following TensorType instances are provided in the pytensor.tensor module. They are all callable, and accept an optional name argument. So for example:
x = at.dmatrix() # creates one Variable with no name
x = at.dmatrix('x') # creates one Variable with name 'x'
xyz = at.dmatrix('xyz') # creates one Variable with name 'xyz'
Constructor | dtype | ndim | shape | broadcastable
---|---|---|---|---
bscalar | int8 | 0 | () | ()
bvector | int8 | 1 | (?,) | (False,)
brow | int8 | 2 | (1,?) | (True, False)
bcol | int8 | 2 | (?,1) | (False, True)
bmatrix | int8 | 2 | (?,?) | (False, False)
btensor3 | int8 | 3 | (?,?,?) | (False, False, False)
btensor4 | int8 | 4 | (?,?,?,?) | (False, False, False, False)
btensor5 | int8 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
btensor6 | int8 | 6 | (?,?,?,?,?,?) | (False,) * 6
btensor7 | int8 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
wscalar | int16 | 0 | () | ()
wvector | int16 | 1 | (?,) | (False,)
wrow | int16 | 2 | (1,?) | (True, False)
wcol | int16 | 2 | (?,1) | (False, True)
wmatrix | int16 | 2 | (?,?) | (False, False)
wtensor3 | int16 | 3 | (?,?,?) | (False, False, False)
wtensor4 | int16 | 4 | (?,?,?,?) | (False, False, False, False)
wtensor5 | int16 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
wtensor6 | int16 | 6 | (?,?,?,?,?,?) | (False,) * 6
wtensor7 | int16 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
iscalar | int32 | 0 | () | ()
ivector | int32 | 1 | (?,) | (False,)
irow | int32 | 2 | (1,?) | (True, False)
icol | int32 | 2 | (?,1) | (False, True)
imatrix | int32 | 2 | (?,?) | (False, False)
itensor3 | int32 | 3 | (?,?,?) | (False, False, False)
itensor4 | int32 | 4 | (?,?,?,?) | (False, False, False, False)
itensor5 | int32 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
itensor6 | int32 | 6 | (?,?,?,?,?,?) | (False,) * 6
itensor7 | int32 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
lscalar | int64 | 0 | () | ()
lvector | int64 | 1 | (?,) | (False,)
lrow | int64 | 2 | (1,?) | (True, False)
lcol | int64 | 2 | (?,1) | (False, True)
lmatrix | int64 | 2 | (?,?) | (False, False)
ltensor3 | int64 | 3 | (?,?,?) | (False, False, False)
ltensor4 | int64 | 4 | (?,?,?,?) | (False, False, False, False)
ltensor5 | int64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ltensor6 | int64 | 6 | (?,?,?,?,?,?) | (False,) * 6
ltensor7 | int64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
dscalar | float64 | 0 | () | ()
dvector | float64 | 1 | (?,) | (False,)
drow | float64 | 2 | (1,?) | (True, False)
dcol | float64 | 2 | (?,1) | (False, True)
dmatrix | float64 | 2 | (?,?) | (False, False)
dtensor3 | float64 | 3 | (?,?,?) | (False, False, False)
dtensor4 | float64 | 4 | (?,?,?,?) | (False, False, False, False)
dtensor5 | float64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
dtensor6 | float64 | 6 | (?,?,?,?,?,?) | (False,) * 6
dtensor7 | float64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
fscalar | float32 | 0 | () | ()
fvector | float32 | 1 | (?,) | (False,)
frow | float32 | 2 | (1,?) | (True, False)
fcol | float32 | 2 | (?,1) | (False, True)
fmatrix | float32 | 2 | (?,?) | (False, False)
ftensor3 | float32 | 3 | (?,?,?) | (False, False, False)
ftensor4 | float32 | 4 | (?,?,?,?) | (False, False, False, False)
ftensor5 | float32 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ftensor6 | float32 | 6 | (?,?,?,?,?,?) | (False,) * 6
ftensor7 | float32 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
cscalar | complex64 | 0 | () | ()
cvector | complex64 | 1 | (?,) | (False,)
crow | complex64 | 2 | (1,?) | (True, False)
ccol | complex64 | 2 | (?,1) | (False, True)
cmatrix | complex64 | 2 | (?,?) | (False, False)
ctensor3 | complex64 | 3 | (?,?,?) | (False, False, False)
ctensor4 | complex64 | 4 | (?,?,?,?) | (False, False, False, False)
ctensor5 | complex64 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ctensor6 | complex64 | 6 | (?,?,?,?,?,?) | (False,) * 6
ctensor7 | complex64 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
zscalar | complex128 | 0 | () | ()
zvector | complex128 | 1 | (?,) | (False,)
zrow | complex128 | 2 | (1,?) | (True, False)
zcol | complex128 | 2 | (?,1) | (False, True)
zmatrix | complex128 | 2 | (?,?) | (False, False)
ztensor3 | complex128 | 3 | (?,?,?) | (False, False, False)
ztensor4 | complex128 | 4 | (?,?,?,?) | (False, False, False, False)
ztensor5 | complex128 | 5 | (?,?,?,?,?) | (False, False, False, False, False)
ztensor6 | complex128 | 6 | (?,?,?,?,?,?) | (False,) * 6
ztensor7 | complex128 | 7 | (?,?,?,?,?,?,?) | (False,) * 7
Plural Constructors#
There are several constructors that can produce multiple variables at once. These are not frequently used in practice, but they appear often in tutorial examples to save space.
- iscalars, lscalars, fscalars, dscalars
Return one or more scalar variables.
- ivectors, lvectors, fvectors, dvectors
Return one or more vector variables.
- irows, lrows, frows, drows
Return one or more row variables.
- icols, lcols, fcols, dcols
Return one or more col variables.
- imatrices, lmatrices, fmatrices, dmatrices
Return one or more matrix variables.
Each of these plural constructors accepts an integer or several strings. If an integer is provided, the method will return that many Variables, and if strings are provided, it will create one Variable for each string, using the string as the Variable's name. For example:
# Creates three matrix `Variable`s with no names
x, y, z = at.dmatrices(3)
# Creates three matrix `Variable`s named 'x', 'y' and 'z'
x, y, z = at.dmatrices('x', 'y', 'z')
Custom tensor types#
If you would like to construct a tensor variable with a non-standard broadcasting pattern, or a larger number of dimensions, you'll need to create your own TensorType instance. You create such an instance by passing the dtype and static shape to the constructor. For example, you can create your own 8-dimensional tensor type:
>>> dtensor8 = TensorType(dtype='float64', shape=(None,)*8)
>>> x = dtensor8()
>>> z = dtensor8('z')
You can also redefine some of the provided types and they will interact correctly:
>>> my_dmatrix = TensorType('float64', shape=(None,)*2)
>>> x = my_dmatrix() # allocate a matrix variable
>>> my_dmatrix == at.dmatrix
True
See TensorType for more information about creating new types of tensors.
Converting from Python Objects#
Another way of creating a TensorVariable (a TensorSharedVariable, to be precise) is by calling pytensor.shared():
x = pytensor.shared(np.random.standard_normal((3, 4)))
This will return a shared variable whose value (accessible via x.get_value()) is a NumPy ndarray. The number of dimensions and dtype of the Variable are inferred from the ndarray argument. The argument to shared will not be copied, and subsequent changes will be reflected in the shared variable's value.
For additional information, see the shared() documentation.
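A quick check of the inferred type, as a minimal sketch:
>>> import numpy as np
>>> import pytensor
>>> x = pytensor.shared(np.random.standard_normal((3, 4)))
>>> x.ndim, x.dtype
(2, 'float64')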
Finally, when you use a NumPy ndarray or a Python number together with TensorVariable instances in arithmetic expressions, the result is a TensorVariable. What happens to the ndarray or the number? PyTensor requires that the inputs to all expressions be Variable instances, so PyTensor automatically wraps them in a TensorConstant.
Note
PyTensor makes a copy of any ndarray that is used in an expression, so subsequent changes to that ndarray will not have any effect on the PyTensor expression in which they're contained.
For NumPy ndarrays the dtype is given, but the static shape/broadcastable pattern must be inferred. The TensorConstant is given a type with a matching dtype, and a static shape/broadcastable pattern with a 1/True for every shape dimension that is one and None/False for every dimension with an unknown shape.
For Python numbers, the static shape/broadcastable pattern is () but the dtype must be inferred. Python integers are stored in the smallest dtype that can hold them, so small constants like 1 are stored in a bscalar. Likewise, Python floats are stored in an fscalar if fscalar suffices to hold them perfectly, but in a dscalar otherwise.
Note
When config.floatX == float32 (see config), then Python floats are stored instead as single-precision floats.
For fine control of this rounding policy, see pytensor.tensor.basic.autocast_float.
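For illustration, assuming the default autocasting configuration, the inferred dtypes can be inspected directly:
>>> at.as_tensor_variable(1).dtype
'int8'
>>> at.as_tensor_variable(2 ** 40).dtype
'int64'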
- pytensor.tensor.as_tensor_variable(x, name=None, ndim=None)[source]#
Turn an argument x into a TensorVariable or TensorConstant.
Many tensor Ops run their arguments through this function as pre-processing. It passes through TensorVariable instances and tries to wrap other objects into TensorConstant.
When x is a Python number, the dtype is inferred as described above.
When x is a list or tuple, it is passed through np.asarray.
If the ndim argument is not None, it must be an integer and the output will be broadcasted if necessary in order to have this many dimensions.
- Return type:
TensorVariable
TensorType and TensorVariable#
- class pytensor.tensor.TensorType(Type)[source]#
The Type class used to mark Variables that stand for numpy.ndarray values. numpy.memmap, which is a subclass of numpy.ndarray, is also allowed. Recall from the tutorial that the purple box in the tutorial's graph-structure figure is an instance of this class.
- shape[source]#
A tuple of None and integer values representing the static shape associated with this Type. None values represent unknown/non-fixed shape values.
Note
Broadcastable tuples/values are an old Theano construct that is being phased out in PyTensor.
- broadcastable[source]#
A tuple of True/False values, one for each dimension. True in position i indicates that at evaluation-time, the ndarray will have size one in that i-th dimension. Such a dimension is called a broadcastable dimension (see Broadcasting).
The broadcastable pattern indicates both the number of dimensions and whether a particular dimension must have length one.
Here is a table mapping some broadcastable patterns to what they mean:
pattern | interpretation
---|---
[] | scalar
[True] | 1D scalar (vector of length 1)
[True, True] | 2D scalar (1x1 matrix)
[False] | vector
[False, False] | matrix
[False] * n | nD tensor
[True, False] | row (1xN matrix)
[False, True] | column (Mx1 matrix)
[False, True, False] | an Mx1xP tensor (a)
[True, False, False] | a 1xNxP tensor (b)
[False, False, False] | an MxNxP tensor (the pattern of a + b)
For dimensions in which broadcasting is False, the length of this dimension can be one or more. For dimensions in which broadcasting is True, the length of this dimension must be one.
When two arguments to an element-wise operation (like addition or subtraction) have a different number of dimensions, the broadcastable pattern is expanded to the left, by padding with True. For example, a vector's pattern, [False], could be expanded to [True, False], and would behave like a row (1xN matrix). In the same way, a matrix ([False, False]) would behave like a 1xNxP tensor ([True, False, False]).
If we wanted to create a TensorType representing a matrix that would broadcast over the middle dimension of a 3-dimensional tensor when adding them together, we would define it like this:
>>> middle_broadcaster = TensorType('complex64', shape=(None, 1, None))
- ndim[source]#
The number of dimensions that a Variable's value will have at evaluation-time. This must be known when we are building the expression graph.
- dtype[source]#
A string indicating the numerical type of the ndarray that a Variable of this Type represents.
The dtype attribute of a TensorType instance can be any of the following strings.
dtype | domain | bits
---|---|---
'int8' | signed integer | 8
'int16' | signed integer | 16
'int32' | signed integer | 32
'int64' | signed integer | 64
'uint8' | unsigned integer | 8
'uint16' | unsigned integer | 16
'uint32' | unsigned integer | 32
'uint64' | unsigned integer | 64
'float32' | floating point | 32
'float64' | floating point | 64
'complex64' | complex | 64 (two float32)
'complex128' | complex | 128 (two float64)
- __init__(self, dtype, broadcastable)[source]#
If you wish to use a Type that is not already available (for example, a 5D tensor), you can build an appropriate Type by instantiating TensorType.
TensorVariable#
- class pytensor.tensor.TensorVariable(Variable, _tensor_py_operators)[source]#
A Variable type that represents symbolic tensors.
See _tensor_py_operators for most of the attributes and methods you'll want to call.
- class pytensor.tensor.TensorConstant(Variable, _tensor_py_operators)[source]#
Python and NumPy numbers are wrapped in this type.
See _tensor_py_operators for most of the attributes and methods you'll want to call.
- class pytensor.tensor.TensorSharedVariable(Variable, _tensor_py_operators)[source]#
This type is returned by shared() when the value to share is a NumPy ndarray.
See _tensor_py_operators for most of the attributes and methods you'll want to call.
- class pytensor.tensor.var._tensor_py_operators[source]#
This mix-in class adds convenient attributes, methods, and support to TensorVariable, TensorConstant and TensorSharedVariable for Python operators (see Operator Support).
- type[source]#
A reference to the TensorType instance describing the sort of values that might be associated with this variable.
- ndim[source]
The number of dimensions of this tensor. Aliased to TensorType.ndim.
- dtype[source]
The numeric type of this tensor. Aliased to TensorType.dtype.
- reshape(shape, ndim=None)[source]
- dimshuffle(*pattern)[source]
Returns a view of this tensor with permuted dimensions. Typically the pattern will include the integers 0, 1, ... ndim-1, and any number of 'x' characters in dimensions where this tensor should be broadcasted.
A few examples of patterns and their effect:
('x',): make a 0d (scalar) into a 1d vector
(0, 1): identity for 2d vectors
(1, 0): swaps the first and second dimensions
('x', 0): make a row out of a 1d vector (N to 1xN)
(0, 'x'): make a column out of a 1d vector (N to Nx1)
(2, 0, 1): AxBxC to CxAxB
(0, 'x', 1): AxB to Ax1xB
(1, 'x', 0): AxB to Bx1xA
(1,): removes the dimension at index 0; it must be a broadcastable dimension
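For illustration, a minimal sketch of the patterns above:
>>> x = at.matrix()
>>> x.dimshuffle(0, 'x', 1).broadcastable
(False, True, False)
>>> at.vector().dimshuffle('x', 0).broadcastable
(True, False)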
- flatten(ndim=1)[source]#
Returns a view of this tensor with ndim dimensions, whose shape for the first ndim-1 dimensions will be the same as self, and shape in the remaining dimension will be expanded to fit in all the data from self.
See flatten().
- {any,all}(axis=None, keepdims=False)
- {sum,prod,mean}(axis=None, dtype=None, keepdims=False, acc_dtype=None)
- {var,std,min,max,argmin,argmax}(axis=None, keepdims=False),
- copy()[source]
Return a new symbolic variable that is a copy of the variable. Does not copy the tag.
- nonzero(self, return_matrix=False)[source]
- nonzero_values(self)[source]
- sort(self, axis=-1, kind='quicksort', order=None)[source]
- argsort(self, axis=-1, kind='quicksort', order=None)[source]
- clip(self, a_min, a_max) with a_min <= a_max
- repeat(repeats, axis=None)[source]
- round(mode='half_away_from_zero')[source]
- zeros_like(model, dtype=None)[source]#
All of the above methods behave like their NumPy counterparts when applied to the current tensor.
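For instance, a minimal sketch of a couple of these reductions:
>>> x = at.dmatrix('x')
>>> x.sum(axis=0).ndim
1
>>> x.max().ndim
0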
- __{abs,neg,lt,le,gt,ge,invert,and,or,add,sub,mul,div,truediv,floordiv}__
These element-wise operations are supported via Python syntax.
- choose(choices, mode='raise')[source]#
Construct an array from an index array and a set of arrays to choose from.
- copy(name=None)[source]#
Return a symbolic copy and optionally assign a name.
Does not copy the tags.
- dimshuffle(*pattern)[source]#
Reorder the dimensions of this variable, optionally inserting broadcasted dimensions.
- Parameters:
pattern – List/tuple of int mixed with 'x' for broadcastable dimensions.
Examples
For example, to create a 3D view of a 2D matrix, call dimshuffle([0, 'x', 1]). This will create a 3D view such that the middle dimension is an implicit broadcasted dimension. To do the same thing on the transpose of that matrix, call dimshuffle([1, 'x', 0]).
Notes
This function supports the pattern passed as a tuple, or as a variable-length argument (e.g. a.dimshuffle(pattern) is equivalent to a.dimshuffle(*pattern) where pattern is a list/tuple of ints mixed with 'x' characters).
See also
DimShuffle
- property imag: Union[Variable, List[Variable]][source]#
Return the imaginary component of complex-valued tensor z.
- mean(axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#
See pytensor.tensor.math.mean().
- prod(axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#
See pytensor.tensor.math.prod().
- property real: Union[Variable, List[Variable]][source]#
Return the real component of complex-valued tensor z.
- reshape(shape, ndim=None)[source]#
Return a reshaped view/copy of this variable.
- Parameters:
shape – Something that can be converted to a symbolic vector of integers.
ndim – The length of the shape. Passing None here means for PyTensor to try and guess the length of shape.
Warning
This has a different signature than NumPy's ndarray.reshape! In NumPy you do not need to wrap the shape arguments in a tuple; in PyTensor you do.
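A minimal sketch of the tuple requirement:
>>> x = at.dmatrix('x')
>>> x.reshape((x.shape[0] * x.shape[1],)).ndim
1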
- squeeze()[source]#
Remove broadcastable dimensions from the shape of an array.
It returns the input array, but with the broadcastable dimensions removed. This is always x itself or a view into x.
- swapaxes(axis1, axis2)[source]#
See pytensor.tensor.basic.swapaxes().
If a matrix is provided with the right axes, its transpose will be returned.
- transfer(target)[source]#
Transfer this array's data to another device.
If target is 'cpu' this will transfer to a TensorType (if not already one). Other types may define additional targets.
- Parameters:
target (str) – The desired location of the output variable
Shaping and Shuffling#
To re-order the dimensions of a variable, or to insert or remove broadcastable dimensions, see _tensor_py_operators.dimshuffle().
- pytensor.tensor.reshape(x, newshape, ndim=None)[source]
- Parameters:
x (any TensorVariable (or compatible)) – variable to be reshaped
newshape (lvector (or compatible)) – the new shape for x
ndim – optional; the length that newshape's value will have. If this is None, then reshape will infer it from newshape.
- Return type:
variable with x's dtype, but ndim dimensions
Note
This function can infer the length of a symbolic newshape value in some cases, but if it cannot and you do not provide the ndim, then this function will raise an Exception.
- pytensor.tensor.shape_padleft(x, n_ones=1)[source]#
Reshape x by left-padding the shape with n_ones 1s. Note that all these new dimensions will be broadcastable. To make them non-broadcastable, see unbroadcast().
- Parameters:
x (any TensorVariable (or compatible)) – variable to be reshaped
- pytensor.tensor.shape_padright(x, n_ones=1)[source]#
Reshape x by right-padding the shape with n_ones ones. Note that all these new dimensions will be broadcastable. To make them non-broadcastable, see unbroadcast().
- Parameters:
x (any TensorVariable (or compatible)) – variable to be reshaped
- pytensor.tensor.shape_padaxis(t, axis)[source]#
Reshape t by inserting 1 at the dimension axis. Note that this new dimension will be broadcastable. To make it non-broadcastable, see unbroadcast().
- Parameters:
t (any TensorVariable (or compatible)) – variable to be reshaped
axis (int) – axis where to add the new dimension to t
Example:
>>> tensor = pytensor.tensor.type.tensor3()
>>> pytensor.tensor.shape_padaxis(tensor, axis=0)
InplaceDimShuffle{x,0,1,2}.0
>>> pytensor.tensor.shape_padaxis(tensor, axis=1)
InplaceDimShuffle{0,x,1,2}.0
>>> pytensor.tensor.shape_padaxis(tensor, axis=3)
InplaceDimShuffle{0,1,2,x}.0
>>> pytensor.tensor.shape_padaxis(tensor, axis=-1)
InplaceDimShuffle{0,1,2,x}.0
- pytensor.tensor.specify_shape(x, shape)[source]#
Specify a fixed shape for a Variable.
If a dimension's shape value is None, the size of that dimension is not considered fixed/static at runtime.
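For illustration, a minimal sketch (the static shape is recorded on the output variable's type):
>>> x = at.matrix()
>>> at.specify_shape(x, (None, 3)).type.shape
(None, 3)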
- pytensor.tensor.flatten(x, ndim=1)[source]#
Similar to reshape(), but the shape is inferred from the shape of x.
- Parameters:
x (any TensorVariable (or compatible)) – variable to be flattened
ndim (int) – the number of dimensions in the returned variable
- Return type:
variable with same dtype as x and ndim dimensions
- Returns:
variable with the same shape as x in the leading ndim-1 dimensions, but with all remaining dimensions of x collapsed into the last dimension.
For example, if we flatten a tensor of shape (2, 3, 4, 5) with flatten(x, ndim=2), then we'll have the same (i.e. 2-1=1) leading dimensions (2,), and the remaining dimensions are collapsed, so the output in this example would have shape (2, 60).
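A quick check of the resulting dimensionality:
>>> x = at.tensor4('x')
>>> at.flatten(x, ndim=2).ndim
2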
- pytensor.tensor.tile(x, reps, ndim=None)[source]#
Construct an array by repeating the input x according to the reps pattern.
Tiles its input according to reps. The length of reps is the number of dimensions of x and contains the number of times to tile x in each dimension.
- See:
numpy.tile documentation for examples.
- Note:
Currently, reps must be a constant, x.ndim and len(reps) must be equal and, if specified, ndim must be equal to both.
- pytensor.tensor.roll(x, shift, axis=None)[source]#
Convenience function to roll TensorTypes along the given axis.
The syntax mirrors the numpy.roll function.
- Parameters:
x (tensor_like) – Input tensor.
shift (int (symbolic or literal)) – The number of places by which elements are shifted.
axis (int (symbolic or literal), optional) – The axis along which elements are shifted. By default, the array is flattened before shifting, after which the original shape is restored.
- Returns:
Output tensor, with the same shape as
x
.- Return type:
tensor
Creating Tensors#
- pytensor.tensor.zeros_like(x, dtype=None)[source]#
- Parameters:
x – tensor that has the same shape as output
dtype – data-type, optional. By default, it will be x.dtype.
Returns a tensor the shape of x filled with zeros of the type of dtype.
- pytensor.tensor.ones_like(x, dtype=None)[source]#
- Parameters:
x – tensor that has the same shape as output
dtype – data-type, optional. By default, it will be x.dtype.
Returns a tensor the shape of x filled with ones of the type of dtype.
- pytensor.tensor.zeros(shape, dtype=None)[source]#
- Parameters:
shape – a tuple/list of scalars with the shape information.
dtype – the dtype of the new tensor. If None, will use "floatX".
Returns a tensor filled with zeros of the provided shape.
- pytensor.tensor.ones(shape, dtype=None)[source]#
- Parameters:
shape – a tuple/list of scalars with the shape information.
dtype – the dtype of the new tensor. If None, will use "floatX".
Returns a tensor filled with ones of the provided shape.
- pytensor.tensor.fill(a, b)[source]#
- Parameters:
a – tensor that has same shape as output
b – PyTensor scalar or value with which you want to fill the output
Create a tensor by filling the shape of a with b.
- pytensor.tensor.alloc(value, *shape)[source]#
- Parameters:
value – a value with which to fill the output
shape – the dimensions of the returned array
- Returns:
an N-dimensional tensor initialized by value and having the specified shape.
- pytensor.tensor.eye(n, m=None, k=0, dtype=pytensor.config.floatX)[source]#
- Parameters:
n – number of rows in output (value or PyTensor scalar)
m – number of columns in output (value or PyTensor scalar)
k – Index of the diagonal: 0 refers to the main diagonal, a positive value refers to an upper diagonal, and a negative value to a lower diagonal. It can be a PyTensor scalar.
- Returns:
An array where all elements are equal to zero, except for the k-th diagonal, whose values are equal to one.
- pytensor.tensor.identity_like(x, dtype=None)[source]#
- Parameters:
x – tensor
dtype – The dtype of the returned tensor. If None, defaults to the dtype of x
- Returns:
A tensor of same shape as x that is filled with zeros everywhere except for the main diagonal, whose values are equal to one. The output will have same dtype as x unless overridden in dtype.
- pytensor.tensor.stack(tensors, axis=0)[source]#
Stack tensors in sequence on the given axis (default is 0).
Take a sequence of tensors and stack them on the given axis to make a single tensor. The size in dimension axis of the result will be equal to the number of tensors passed.
- Parameters:
tensors – a list or a tuple of one or more tensors of the same rank.
axis – the axis along which the tensors will be stacked. Default value is 0.
- Returns:
A tensor such that rval[0] == tensors[0], rval[1] == tensors[1], etc.
Examples:
>>> a = pytensor.tensor.type.scalar()
>>> b = pytensor.tensor.type.scalar()
>>> c = pytensor.tensor.type.scalar()
>>> x = pytensor.tensor.stack([a, b, c])
>>> x.ndim # x is a vector of length 3.
1
>>> a = pytensor.tensor.type.tensor4()
>>> b = pytensor.tensor.type.tensor4()
>>> c = pytensor.tensor.type.tensor4()
>>> x = pytensor.tensor.stack([a, b, c])
>>> x.ndim # x is a 5d tensor.
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape # 3 tensors are stacked on axis 0
(3, 2, 2, 2, 2)
We can also specify a different axis than the default value 0:
>>> x = pytensor.tensor.stack([a, b, c], axis=3)
>>> x.ndim
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape # 3 tensors are stacked on axis 3
(2, 2, 2, 3, 2)
>>> x = pytensor.tensor.stack([a, b, c], axis=-2)
>>> x.ndim
5
>>> rval = x.eval(dict((t, np.zeros((2, 2, 2, 2))) for t in [a, b, c]))
>>> rval.shape # 3 tensors are stacked on axis -2
(2, 2, 2, 3, 2)
- pytensor.tensor.stack(*tensors)[source]
Warning
The interface stack(*tensors) is deprecated! Use stack(tensors, axis=0) instead.
Stack tensors in sequence vertically (row wise).
Take a sequence of tensors and stack them vertically to make a single tensor.
- param tensors:
one or more tensors of the same rank
- returns:
A tensor such that rval[0] == tensors[0], rval[1] == tensors[1], etc.
>>> x0 = at.scalar()
>>> x1 = at.scalar()
>>> x2 = at.scalar()
>>> x = at.stack(x0, x1, x2)
>>> x.ndim # x is a vector of length 3.
1
- pytensor.tensor.concatenate(tensor_list, axis=0)[source]#
- Parameters:
tensor_list (a list or tuple of Tensors that all have the same shape in the axes not specified by the axis argument) – one or more Tensors to be concatenated together into one.
axis (literal or symbolic integer) – Tensors will be joined along this axis, so they may have different shape[axis]
>>> x0 = at.fmatrix()
>>> x1 = at.ftensor3()
>>> x2 = at.fvector()
>>> x = at.concatenate([x0, x1[0], at.shape_padright(x2)], axis=1)
>>> x.ndim
2
- pytensor.tensor.stacklists(tensor_list)[source]#
- Parameters:
tensor_list (an iterable that contains either tensors or other iterables of the same type as tensor_list; in other words, this is a tree whose leaves are tensors) – tensors to be stacked together.
Recursively stack lists of tensors to maintain similar structure.
This function can create a tensor from a shaped list of scalars:
>>> from pytensor.tensor import stacklists, scalars, matrices
>>> from pytensor import function
>>> a, b, c, d = scalars('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> f(1, 2, 3, 4)
array([[ 1., 2.],
       [ 3., 4.]])
We can also stack arbitrarily shaped tensors. Here we stack matrices into a 2 by 2 grid:
>>> from numpy import ones
>>> a, b, c, d = matrices('abcd')
>>> X = stacklists([[a, b], [c, d]])
>>> f = function([a, b, c, d], X)
>>> x = ones((4, 4), 'float32')
>>> f(x, x, x, x).shape
(2, 2, 4, 4)
- pytensor.tensor.basic.choose(a, choices, mode='raise')[source]#
Construct an array from an index array and a set of arrays to choose from.
First of all, if confused or uncertain, definitely look at the Examples - in its full generality, this function is less simple than it might seem from the following code description (below ndi = numpy.lib.index_tricks):
np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)]).
But this omits some subtleties. Here is a fully general summary:
Given an index array (a) of integers and a sequence of n arrays (choices), a and each choice array are first broadcast, as necessary, to arrays of a common shape; calling these Ba and Bchoices[i], i = 0,…,n-1 we have that, necessarily, Ba.shape == Bchoices[i].shape for each i. Then, a new array with shape Ba.shape is created as follows:
if mode='raise' (the default), then, first of all, each element of a (and thus Ba) must be in the range [0, n-1]; now, suppose that i (in that range) is the value at the (j0, j1, …, jm) position in Ba; then the value at the same position in the new array is the value in Bchoices[i] at that same position;
if mode='wrap', values in a (and thus Ba) may be any (signed) integer; modular arithmetic is used to map integers outside the range [0, n-1] back into that range; and then the new array is constructed as above;
if mode='clip', values in a (and thus Ba) may be any (signed) integer; negative integers are mapped to 0; values greater than n-1 are mapped to n-1; and then the new array is constructed as above.
- Parameters:
a (int array) – This array must contain integers in [0, n-1], where n is the number of choices, unless mode=wrap or mode=clip, in which cases any integers are permissible.
choices (sequence of arrays) – Choice arrays. a and all of the choices must be broadcastable to the same shape. If choices is itself an array (not recommended), then its outermost dimension (i.e., the one corresponding to choices.shape[0]) is taken as defining the sequence.
mode ({raise (default), wrap, clip}, optional) – Specifies how indices outside [0, n-1] will be treated:
raise: an exception is raised
wrap: value becomes value mod n
clip: values < 0 are mapped to 0, values > n-1 are mapped to n-1
- Returns:
The merged result.
- Return type:
merged_array - array
- Raises:
ValueError - shape mismatch – If a and each choice array are not all broadcastable to the same shape.
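Since choose follows NumPy's semantics, a NumPy sketch illustrates the indexing:
>>> import numpy as np
>>> a = np.array([0, 1, 0])
>>> choices = [np.array([1, 2, 3]), np.array([10, 20, 30])]
>>> np.choose(a, choices)
array([ 1, 20,  3])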
Reductions#
- pytensor.tensor.max(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the maximum
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
maximum of x along axis
- axis can be:
None - in which case the maximum is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
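For illustration, a minimal sketch of the keepdims behaviour:
>>> x = at.dmatrix('x')
>>> at.max(x, axis=1).ndim
1
>>> at.max(x, axis=1, keepdims=True).ndim
2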
- pytensor.tensor.argmax(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis along which to compute the index of the maximum
- Parameter:
keepdims - (boolean) If this is set to True, the axis which is reduced is left in the result as a dimension with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
the index of the maximum value along a given axis
if axis == None, argmax over the flattened tensor (like NumPy)
- pytensor.tensor.max_and_argmax(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis along which to compute the maximum and its index
- Parameter:
keepdims - (boolean) If this is set to True, the axis which is reduced is left in the result as a dimension with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
the maximum value along a given axis and its index.
if axis == None, max_and_argmax over the flattened tensor (like NumPy)
- pytensor.tensor.min(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the minimum
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
minimum of x along axis
axis can be:
None - in which case the minimum is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.argmin(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis along which to compute the index of the minimum
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
the index of the minimum value along a given axis
if axis == None, argmin over the flattened tensor (like NumPy)
- pytensor.tensor.sum(x, axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the sum
- Parameter:
dtype - The dtype of the returned tensor. If None, then we use the default dtype which is the same as the input tensor’s dtype except when:
the input dtype is a signed integer of precision < 64 bit, in which case we use int64
the input dtype is an unsigned integer of precision < 64 bit, in which case we use uint64
This default dtype does _not_ depend on the value of “acc_dtype”.
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Parameter:
acc_dtype - The dtype of the internal accumulator. If None (default), we use the dtype in the list below, or the input dtype if its precision is higher:
for int dtypes, we use at least int64;
for uint dtypes, we use at least uint64;
for float dtypes, we use at least float64;
for complex dtypes, we use at least complex128.
- Returns:
sum of x along axis
axis can be:
None - in which case the sum is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
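For illustration, the default upcasting rule for small integer dtypes:
>>> v = at.bvector()
>>> v.sum().dtype
'int64'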
- pytensor.tensor.prod(x, axis=None, dtype=None, keepdims=False, acc_dtype=None, no_zeros_in_input=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the product
- Parameter:
dtype - The dtype of the returned tensor. If None, then we use the default dtype which is the same as the input tensor’s dtype except when:
the input dtype is a signed integer of precision < 64 bit, in which case we use int64
the input dtype is an unsigned integer of precision < 64 bit, in which case we use uint64
This default dtype does _not_ depend on the value of “acc_dtype”.
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Parameter:
acc_dtype - The dtype of the internal accumulator. If None (default), we use the dtype in the list below, or the input dtype if its precision is higher:
for int dtypes, we use at least int64;
for uint dtypes, we use at least uint64;
for float dtypes, we use at least float64;
for complex dtypes, we use at least complex128.
- Parameter:
no_zeros_in_input - The gradient of prod is complicated, as we need to handle 3 different cases: without zeros in the reduced input group, with 1 zero, or with more zeros.
This could slow you down, but more importantly, we currently don't support the second derivative of the 3 cases. So you cannot take the second derivative of the default prod().
To remove the handling of the special cases of 0, and so get a small speed-up and allow the second derivative, set no_zeros_in_input to True. It defaults to False.
It is the user's responsibility to make sure there are no zeros in the inputs. If there are, the gradient will be wrong.
- Returns:
product of every term in x along axis
axis can be:
None - in which case the product is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.mean(x, axis=None, dtype=None, keepdims=False, acc_dtype=None)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the mean
- Parameter:
dtype - The dtype to cast the result of the inner summation into. For instance, by default, a sum of a float32 tensor will be done in float64 (acc_dtype would be float64 by default), but that result will be casted back in float32.
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Parameter:
acc_dtype - The dtype of the internal accumulator of the inner summation. This will not necessarily be the dtype of the output (in particular if it is a discrete (int/uint) dtype, the output will be in a float type). If None, then we use the same rules as
sum()
.- Returns:
mean value of x along axis
axis can be:
None - in which case the mean is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.var(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the variance
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
variance of x along axis
axis can be:
None - in which case the variance is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.std(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to compute the standard deviation
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
standard deviation of x along axis
axis can be:
None - in which case the standard deviation is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.all(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to apply ‘bitwise and’
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
bitwise and of x along axis
axis can be:
None - in which case the 'bitwise and' is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.any(x, axis=None, keepdims=False)[source]#
- Parameter:
x - symbolic Tensor (or compatible)
- Parameter:
axis - axis or axes along which to apply bitwise or
- Parameter:
keepdims - (boolean) If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the original tensor.
- Returns:
bitwise or of x along axis
axis can be:
None - in which case the 'bitwise or' is computed along all axes (like NumPy)
an int - computed along this axis
a list of ints - computed along these axes
- pytensor.tensor.ptp(x, axis=None)[source]#
Range of values (maximum - minimum) along an axis. The name of the function comes from the acronym for peak to peak.
- Parameter:
x Input tensor.
- Parameter:
axis Axis along which to find the peaks. By default, flatten the array.
- Returns:
A new array holding the result.
Indexing#
Like NumPy, PyTensor distinguishes between basic and advanced indexing. PyTensor fully supports basic indexing (see NumPy’s indexing) and integer advanced indexing.
Index-assignment is not supported. If you want to do something like a[5] = b or a[5] += b, see pytensor.tensor.subtensor.set_subtensor() and pytensor.tensor.subtensor.inc_subtensor() below.
- pytensor.tensor.subtensor.set_subtensor(x, y, inplace=False, tolerate_inplace_aliasing=False)[source]#
Return x with the given subtensor overwritten by y.
- Parameters:
x – Symbolic variable for the lvalue of = operation.
y – Symbolic variable for the rvalue of = operation.
tolerate_inplace_aliasing – See inc_subtensor for documentation.
Examples
To replicate the numpy expression “r[10:] = 5”, type
>>> r = ivector()
>>> new_r = set_subtensor(r[10:], 5)
- pytensor.tensor.subtensor.inc_subtensor(x, y, inplace=False, set_instead_of_inc=False, tolerate_inplace_aliasing=False, ignore_duplicates=False)[source]#
Update the value of an indexed array by a given amount.
This is equivalent to
x[indices] += y
ornp.add.at(x, indices, y)
, depending on the value ofignore_duplicates
.- Parameters:
x – The symbolic result of a Subtensor operation.
y – The amount by which to increment the array.
inplace – Don’t use. PyTensor will do in-place operations itself, when possible.
set_instead_of_inc – If True, do a set_subtensor instead.
tolerate_inplace_aliasing – Allow x and y to be views of a single underlying array even while working in-place. For correct results, x and y must not be overlapping views; if they overlap, the result of this Op will generally be incorrect. This value has no effect if inplace=False.
ignore_duplicates – This determines whether x[indices] += y is used or np.add.at(x, indices, y). When the special duplicates handling of np.add.at isn't required, setting this option to True (i.e. using x[indices] += y) can result in faster compiled graphs.
Examples
To replicate the expression r[10:] += 5:
>>> r = ivector()
>>> new_r = inc_subtensor(r[10:], 5)
To replicate the expression r[[0, 1, 0]] += 5:
>>> r = ivector()
>>> new_r = inc_subtensor(r[[0, 1, 0]], 5, ignore_duplicates=True)
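These functions return new symbolic variables; the original is left unchanged. A minimal sketch of evaluating the result:
>>> import numpy as np
>>> r = at.ivector()
>>> new_r = at.inc_subtensor(r[1:], 5)
>>> new_r.eval({r: np.zeros(3, dtype='int32')})
array([0, 5, 5], dtype=int32)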
Operator Support#
Many Python operators are supported.
>>> a, b = at.itensor3(), at.itensor3() # example inputs
Arithmetic#
>>> a + 3 # at.add(a, 3) -> itensor3
>>> 3 - a # at.sub(3, a)
>>> a * 3.5 # at.mul(a, 3.5) -> ftensor3 or dtensor3 (depending on casting)
>>> 2.2 / a # at.truediv(2.2, a)
>>> 2.2 // a # at.intdiv(2.2, a)
>>> 2.2**a # at.pow(2.2, a)
>>> b % a # at.mod(b, a)
Bitwise#
>>> a & b # at.and_(a,b) bitwise and (alias at.bitwise_and)
>>> a ^ 1 # at.xor(a,1) bitwise xor (alias at.bitwise_xor)
>>> a | b # at.or_(a,b) bitwise or (alias at.bitwise_or)
>>> ~a # at.invert(a) bitwise invert (alias at.bitwise_not)
Inplace#
In-place operators are not supported. PyTensor's graph rewrites will determine which intermediate values to use for in-place computations. If you would like to update the value of a shared variable, consider using the updates argument to pytensor.function().
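For illustration, a minimal accumulator sketch using the updates argument:
import numpy as np
import pytensor
import pytensor.tensor as at

total = pytensor.shared(np.array(0.0), name='total')
inc = at.dscalar('inc')
# Each call returns the old value of `total`, then replaces it with total + inc.
accumulate = pytensor.function([inc], total, updates=[(total, total + inc)])
accumulate(1.0)
accumulate(2.0)
print(total.get_value())  # 3.0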
Casting#
- pytensor.tensor.cast(x, dtype)[source]#
Cast any tensor x to a tensor of the same shape, but with a different numerical type dtype.
This is not a reinterpret cast, but a coercion cast, similar to numpy.asarray(x, dtype=dtype).
import pytensor.tensor as at
x = at.matrix()
x_as_int = at.cast(x, 'int32')
Attempting to cast a complex value to a real value is ambiguous and will raise an exception. Use real, imag, abs, or angle.
Comparisons#
- The six usual equality and inequality operators share the same interface.
- Parameter:
a - symbolic Tensor (or compatible)
- Parameter:
b - symbolic Tensor (or compatible)
- Return type:
symbolic Tensor
- Returns:
a symbolic tensor representing the application of the logical Elemwise operator.
Note
PyTensor has no boolean dtype. Instead, all boolean tensors are represented in 'int8'.
Here is an example with the less-than operator:
import pytensor.tensor as at
x, y = at.dmatrices('x', 'y')
z = at.le(x, y)
- pytensor.tensor.lt(a, b)[source]#
Returns a symbolic 'int8' tensor representing the result of logical less-than (a<b).
Also available using syntax a < b
- pytensor.tensor.gt(a, b)[source]#
Returns a symbolic 'int8' tensor representing the result of logical greater-than (a>b).
Also available using syntax a > b
- pytensor.tensor.le(a, b)[source]#
Returns a variable representing the result of logical less than or equal (a<=b).
Also available using syntax a <= b
- pytensor.tensor.ge(a, b)[source]#
Returns a variable representing the result of logical greater than or equal (a>=b).
Also available using syntax a >= b
- pytensor.tensor.eq(a, b)[source]#
Returns a variable representing the result of logical equality (a==b).
- pytensor.tensor.neq(a, b)[source]#
Returns a variable representing the result of logical inequality (a!=b).
- pytensor.tensor.isnan(a)[source]#
Returns a variable representing the comparison of a elements with nan.
This is equivalent to numpy.isnan.
- pytensor.tensor.isinf(a)[source]#
Returns a variable representing the comparison of a elements with inf or -inf.
This is equivalent to numpy.isinf.
- pytensor.tensor.isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False)[source]#
Returns a symbolic 'int8' tensor representing where two tensors are equal within a tolerance.
The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the absolute difference atol are added together to compare against the absolute difference between a and b.
For finite values, isclose uses the following equation to test whether two floating point values are equivalent: |a - b| <= (atol + rtol * |b|)
For infinite values, isclose checks if both values are the same signed inf value.
If equal_nan is True, isclose considers NaN values in the same position to be close. Otherwise, NaN values are not considered close.
This is equivalent to numpy.isclose.
Condition#
- pytensor.tensor.switch(cond, ift, iff)[source]#
Returns a variable representing a switch between ift (i.e. "if true") and iff (i.e. "if false") based on the condition cond. This is the PyTensor equivalent of numpy.where.
- Parameter:
cond - symbolic Tensor (or compatible)
- Parameter:
ift - symbolic Tensor (or compatible)
- Parameter:
iff - symbolic Tensor (or compatible)
- Return type:
symbolic Tensor
import pytensor.tensor as at
a, b = at.dmatrices('a', 'b')
x, y = at.dmatrices('x', 'y')
z = at.switch(at.lt(a, b), x, y)
- pytensor.tensor.clip(x, min, max)[source]#
Return a variable representing x, but with all elements greater than max clipped to max and all elements less than min clipped to min.
Normal broadcasting rules apply to each of x, min, and max.
Note that there is no warning for inputs that are the wrong way round (min > max), and results in this case may differ from numpy.clip.
Bit-wise#
- The bitwise operators possess this interface:
- Parameter:
a - symbolic tensor of integer type.
- Parameter:
b - symbolic tensor of integer type.
Note
The bitwise operators must have an integer type as input.
The bit-wise not (invert) takes only one parameter.
- Return type:
symbolic tensor with corresponding dtype.
Here is an example using the bit-wise and_ via the & operator:
import pytensor.tensor as at
x,y = at.imatrices('x','y')
z = x & y
Mathematical#
- pytensor.tensor.abs(a)[source]#
Returns a variable representing the absolute value of a, i.e. |a|.
Note
Can also be accessed using builtins.abs, i.e. abs(a).
- pytensor.tensor.angle(a)[source]#
Returns a variable representing the angular component of the complex-valued tensor a.
- pytensor.tensor.maximum(a, b)[source]#
Returns a variable representing the maximum element by element of a and b
- pytensor.tensor.minimum(a, b)[source]#
Returns a variable representing the minimum element by element of a and b
- pytensor.tensor.reciprocal(a)[source]#
Returns a variable representing the reciprocal of a, i.e. 1.0/a.
- pytensor.tensor.log(a), log2(a), log10(a)[source]#
Returns a variable representing the base e, 2 or 10 logarithm of a.
- pytensor.tensor.ceil(a)[source]#
Returns a variable representing the ceiling of a (for example ceil(2.1) is 3).
- pytensor.tensor.floor(a)[source]#
Returns a variable representing the floor of a (for example floor(2.9) is 2).
- pytensor.tensor.round(a, mode='half_away_from_zero')[source]
Returns a variable representing the rounding of a in the same dtype as a. Implemented rounding mode are half_away_from_zero and half_to_even.
- pytensor.tensor.iround(a, mode='half_away_from_zero')[source]#
Short hand for cast(round(a, mode),’int64’).
- pytensor.tensor.cos(a), sin(a), tan(a)[source]#
Returns a variable representing the trigonometric functions of a (cosine, sine and tangent).
- pytensor.tensor.cosh(a), sinh(a), tanh(a)[source]#
Returns a variable representing the hyperbolic trigonometric functions of a (hyperbolic cosine, sine and tangent).
- pytensor.tensor.erf(a), erfc(a)[source]#
Returns a variable representing the error function or the complementary error function. wikipedia
- pytensor.tensor.erfinv(a), erfcinv(a)[source]#
Returns a variable representing the inverse error function or the inverse complementary error function. wikipedia
- pytensor.tensor.gammaln(a)[source]#
Returns a variable representing the logarithm of the gamma function.
- pytensor.tensor.psi(a)[source]#
Returns a variable representing the derivative of the logarithm of the gamma function (also called the digamma function).
- pytensor.tensor.chi2sf(a, df)[source]#
Returns a variable representing the survival function (1 - cdf), which is sometimes more accurate.
C code is provided in the Theano_lgpl repository, which makes it faster.
You can find more information about Broadcasting in the Broadcasting tutorial.
Linear Algebra#
- pytensor.tensor.dot(X, Y)[source]#
For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to the inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of X and the second-to-last of Y:
- Parameters:
X (symbolic tensor) – left term
Y (symbolic tensor) – right term
- Return type:
symbolic matrix or vector
- Returns:
the inner product of X and Y.
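For illustration, a minimal sketch:
>>> x = at.dmatrix('x')
>>> y = at.dvector('y')
>>> at.dot(x, y).ndim
1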
- pytensor.tensor.outer(X, Y)[source]#
- Parameters:
X (symbolic vector) – left term
Y (symbolic vector) – right term
- Return type:
symbolic matrix
- Returns:
vector-vector outer product
- pytensor.tensor.tensordot(a, b, axes=2)[source]#
Given two tensors a and b, tensordot computes a generalized dot product over the provided axes. PyTensor's implementation reduces all expressions to matrix or vector dot products and is based on code from Tijmen Tieleman's gnumpy (http://www.cs.toronto.edu/~tijmen/gnumpy.html).
- Parameters:
a (symbolic tensor) – the first tensor variable
b (symbolic tensor) – the second tensor variable
axes (int or array-like of length 2) –
an integer or array. If an integer, the number of axes to sum over. If an array, it must have two array elements containing the axes to sum over in each tensor.
Note that the default value of 2 is not guaranteed to work for all values of a and b, and an error will be raised if that is the case. The reason for keeping the default is to maintain the same signature as NumPy’s tensordot function (and np.tensordot raises analogous errors for non-compatible inputs).
If an integer i, it is converted to an array containing the last i dimensions of the first tensor and the first i dimensions of the second tensor:
axes = [range(a.ndim - i, b.ndim), range(i)]
If an array, its two elements must contain compatible axes of the two tensors. For example, [[1, 2], [2, 0]] means sum over the 2nd and 3rd axes of a and the 3rd and 1st axes of b. (Remember axes are zero-indexed!) The 2nd axis of a and the 3rd axis of b must have the same shape; the same is true for the 3rd axis of a and the 1st axis of b.
- Returns:
a tensor with shape equal to the concatenation of a’s shape (less any dimensions that were summed over) and b’s shape (less any dimensions that were summed over).
- Return type:
symbolic tensor
It may be helpful to consider an example to see what tensordot does. PyTensor’s implementation is identical to NumPy’s. Here a has shape (2, 3, 4) and b has shape (5, 6, 4, 3). The axes to sum over are [[1, 2], [3, 2]] – note that a.shape[1] == b.shape[3] and a.shape[2] == b.shape[2]; these axes are compatible. The resulting tensor will have shape (2, 5, 6) – the dimensions that are not being summed:
import numpy as np

a = np.random.random((2, 3, 4))
b = np.random.random((5, 6, 4, 3))
c = np.tensordot(a, b, [[1, 2], [3, 2]])
a0, a1, a2 = a.shape
b0, b1, _, _ = b.shape
cloop = np.zeros((a0, b0, b1))

# Loop over non-summed indices -- these exist in the tensor product.
for i in range(a0):
    for j in range(b0):
        for k in range(b1):
            # Loop over summed indices -- these don't exist in the tensor product.
            for l in range(a1):
                for m in range(a2):
                    cloop[i, j, k] += a[i, l, m] * b[j, k, m, l]

assert np.allclose(c, cloop)
This specific implementation avoids a loop by transposing a and b such that the summed axes of a are last and the summed axes of b are first. The resulting arrays are reshaped to 2 dimensions (or left as vectors, if appropriate) and a matrix or vector dot product is taken. The result is reshaped back to the required output dimensions.
In an extreme case, no axes may be specified. The resulting tensor will have shape equal to the concatenation of the shapes of a and b:
>>> c = np.tensordot(a, b, 0)
>>> a.shape
(2, 3, 4)
>>> b.shape
(5, 6, 4, 3)
>>> print(c.shape)
(2, 3, 4, 5, 6, 4, 3)
- Note:
See the documentation of numpy.tensordot for more examples.
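The same contraction can be written symbolically with PyTensor; this sketch reuses the shapes from the NumPy example above (the variable names sa, sb, sc are illustrative) and only checks the output shape:
>>> import numpy as np
>>> sa = at.dtensor3('sa')
>>> sb = at.dtensor4('sb')
>>> sc = at.tensordot(sa, sb, [[1, 2], [3, 2]])
>>> sc.eval({sa: np.random.random((2, 3, 4)),
...          sb: np.random.random((5, 6, 4, 3))}).shape
(2, 5, 6)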
- pytensor.tensor.batched_dot(X, Y)[source]#
- Parameters:
X – A tensor with sizes e.g., for the 3-D case: (dim1, dim3, dim2)
Y – A tensor with sizes e.g., for the 3-D case: (dim1, dim2, dim4)
This function computes the dot product between the two tensors by iterating over the first (batch) dimension using scan. For the 3-D case above, it returns a tensor of size (dim1, dim3, dim4). Example:
>>> first = at.tensor3('first')
>>> second = at.tensor3('second')
>>> result = at.batched_dot(first, second)
- Note:
This is a subset of numpy.einsum, but we do not provide it for now.
- Parameters:
X (symbolic tensor) – left term
Y (symbolic tensor) – right term
- Returns:
tensor of products
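Continuing the example above, a shape check with hypothetical batch inputs (batch size 10; this assumes the default floatX is float64 so that NumPy's float64 arrays are accepted):
>>> import numpy as np
>>> result.eval({first: np.ones((10, 3, 2)),
...              second: np.ones((10, 2, 4))}).shape
(10, 3, 4)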
- pytensor.tensor.batched_tensordot(X, Y, axes=2)[source]#
- Parameters:
X – A tensor with sizes e.g., for the 3-D case: (dim1, dim3, dim2)
Y – A tensor with sizes e.g., for the 3-D case: (dim1, dim2, dim4)
axes (int or array-like of length 2) –
an integer or array. If an integer, the number of axes to sum over. If an array, it must have two array elements containing the axes to sum over in each tensor.
If an integer i, it is converted to an array containing the last i dimensions of the first tensor and the first i dimensions of the second tensor (excluding the first (batch) dimension):
axes = [range(a.ndim - i, a.ndim), range(1, i + 1)]
If an array, its two elements must contain compatible axes of the two tensors. For example, [[1, 2], [2, 4]] means sum over the 2nd and 3rd axes of a and the 3rd and 5th axes of b. (Remember axes are zero-indexed!) The 2nd axis of a and the 3rd axis of b must have the same shape; the same is true for the 3rd axis of a and the 5th axis of b.
- Returns:
a tensor with shape equal to the concatenation of a's shape (less any dimensions that were summed over) and b's shape (less its first (batch) dimension and any dimensions that were summed over).
- Return type:
tensor of tensordots
A hybrid of batched_dot and tensordot, this function computes the tensordot product between the two tensors by iterating over the first (batch) dimension using scan to perform a sequence of tensordots.
- Note:
See tensordot() and batched_dot() for supplementary documentation.
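A shape-level sketch with hypothetical shapes (the first dimension is the batch and is never summed over; note that the axes refer to the full tensors, including the batch dimension):
>>> import numpy as np
>>> p = at.dtensor3('p')
>>> q = at.dtensor3('q')
>>> r = at.batched_tensordot(p, q, axes=[[2], [1]])  # sum axis 2 of p against axis 1 of q
>>> r.eval({p: np.ones((5, 3, 4)), q: np.ones((5, 4, 2))}).shape
(5, 3, 2)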
- pytensor.tensor.mgrid()[source]#
- Returns:
an instance which returns a dense (or fleshed out) mesh-grid when indexed, so that each returned argument has the same shape. The dimensions and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive.
Example:
>>> a = at.mgrid[0:5, 0:3]
>>> a[0].eval()
array([[0, 0, 0],
       [1, 1, 1],
       [2, 2, 2],
       [3, 3, 3],
       [4, 4, 4]])
>>> a[1].eval()
array([[0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2],
       [0, 1, 2]])
- pytensor.tensor.ogrid()[source]#
- Returns:
an instance which returns an open (i.e. not fleshed out) mesh-grid when indexed, so that only one dimension of each returned array is greater than 1. The dimension and number of the output arrays are equal to the number of indexing dimensions. If the step length is not a complex number, then the stop is not inclusive.
Example:
>>> b = at.ogrid[0:5, 0:3]
>>> b[0].eval()
array([[0],
       [1],
       [2],
       [3],
       [4]])
>>> b[1].eval()
array([[0, 1, 2]])
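Because only one dimension of each returned array is greater than 1, the open grids broadcast against each other; continuing the example above:
>>> (b[0] + b[1]).eval().shape
(5, 3)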
Gradient / Differentiation#
Driver for gradient calculations.
- pytensor.gradient.grad(cost: Optional[Variable], wrt: Union[Variable, Sequence[Variable]], consider_constant: Optional[Sequence[Variable]] = None, disconnected_inputs: Literal['ignore', 'warn', 'raise'] = 'raise', add_names: bool = True, known_grads: Optional[Mapping[Variable, Variable]] = None, return_disconnected: Literal['none', 'zero', 'disconnected'] = 'zero', null_gradients: Literal['raise', 'return'] = 'raise') → Union[Variable, None, Sequence[Optional[Variable]]][source]#
Return symbolic gradients of one cost with respect to one or more variables.
For more information about how automatic differentiation works in PyTensor, see gradient. For information on how to implement the gradient of a certain Op, see grad().
- Parameters:
cost – Value that we are differentiating (i.e. for which we want the gradient). May be None if known_grads is provided.
wrt – The term(s) with respect to which we want gradients.
consider_constant – Expressions not to backpropagate through.
disconnected_inputs ({'ignore', 'warn', 'raise'}) –
Defines the behaviour if some of the variables in wrt are not part of the computational graph computing cost (or if all links are non-differentiable). The possible values are:
'ignore': considers that the gradient on these parameters is zero
'warn': considers the gradient zero and prints a warning
'raise': raises DisconnectedInputError
add_names – If True, variables generated by grad will be named (d<cost.name>/d<wrt.name>), provided that both cost and wrt have names.
known_grads – An ordered dictionary mapping variables to their gradients. This is useful in the case where you know the gradients of some variables but do not know the original cost.
return_disconnected –
'zero': if wrt[i] is disconnected, return value i will be wrt[i].zeros_like()
'none': if wrt[i] is disconnected, return value i will be None
'disconnected': returns variables of type DisconnectedType
null_gradients –
Defines the behaviour when some of the variables in wrt have a null gradient. The possible values are:
'raise': raises a NullTypeGradError exception
'return': returns the null gradients
- Returns:
A symbolic expression for the gradient of cost with respect to each of the wrt terms. If an element of wrt is not differentiable with respect to the output, then a zero variable is returned.
See the gradient page for complete documentation of the gradient module.
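A minimal end-to-end sketch (the cost and values here are illustrative): build a scalar cost, take its gradient symbolically, and compile a function that evaluates it.
>>> import pytensor
>>> import pytensor.tensor as at
>>> x = at.dscalar('x')
>>> cost = x ** 2
>>> gx = pytensor.gradient.grad(cost, x)  # symbolic expression for d(cost)/dx
>>> f = pytensor.function([x], gx)
>>> float(f(4.0))  # d(x**2)/dx = 2*x, evaluated at x = 4
8.0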