torch.arange

torch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor

Returns a 1-D tensor of size $\left\lceil \frac{\text{end} - \text{start}}{\text{step}} \right\rceil$ with values from the interval [start, end) taken with common difference step beginning from start.

Note that non-integer step is subject to floating point rounding errors when comparing against end; to avoid inconsistency, we advise adding a small epsilon to end in such cases.

$\text{out}_{i+1} = \text{out}_{i} + \text{step}$
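
The note above advises padding end with a small epsilon when step is non-integer. A minimal sketch of why, assuming nothing beyond the documented behavior (the exact element count of the unpadded call depends on floating-point rounding):

import torch

# With a non-integer step, the values are built by repeated addition
# (out[i+1] = out[i] + step), so whether the last value lands just below
# or just above `end` depends on floating-point rounding.
ambiguous = torch.arange(1.0, 1.3, 0.1)         # may or may not contain a value ~1.3

# Padding `end` with a small epsilon makes the intended endpoint explicit.
eps = 1e-4
consistent = torch.arange(1.0, 1.3 + eps, 0.1)  # reliably includes a value ~1.3

print(ambiguous.numel(), consistent.numel())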
Parameters
  • start (Number) – the starting value for the set of points. Default: 0.
  • end (Number) – the ending value for the set of points
  • step (Number) – the gap between each pair of adjacent points. Default: 1.
Keyword Arguments
  • out (Tensor, optional) – the output tensor.
  • dtype (torch.dtype, optional) – the desired data type of the returned tensor. Default: if None, uses a global default (see torch.set_default_tensor_type()). If dtype is not given, the data type is inferred from the other input arguments: if any of start, end, or step are floating-point, the dtype is inferred to be the default dtype (see torch.get_default_dtype()); otherwise, the dtype is inferred to be torch.int64. A short sketch of these rules follows this list.
  • layout (torch.layout, optional) – the desired layout of returned Tensor. Default: torch.strided.
  • device (torch.device, optional) – the desired device of returned tensor. Default: if None, uses the current device for the default tensor type (see torch.set_default_tensor_type()). device will be the CPU for CPU tensor types and the current CUDA device for CUDA tensor types.
  • requires_grad (bool, optional) – If autograd should record operations on the returned tensor. Default: False.
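
A short sketch of the dtype inference rules described above (the dtypes noted in the comments assume the global default dtype has not been changed):

import torch

a = torch.arange(5)                       # all-integer arguments -> torch.int64
b = torch.arange(0, 2.5, 0.5)             # a floating-point argument -> default dtype (torch.float32 unless changed)
c = torch.arange(5, dtype=torch.float64)  # an explicit dtype overrides inference

print(a.dtype, b.dtype, c.dtype)          # expected: torch.int64 torch.float32 torch.float64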

Example:

>>> torch.arange(5)
tensor([ 0,  1,  2,  3,  4])
>>> torch.arange(1, 4)
tensor([ 1,  2,  3])
>>> torch.arange(1, 2.5, 0.5)
tensor([ 1.0000,  1.5000,  2.0000])
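
The remaining keyword arguments behave as they do for other factory functions. A minimal sketch under the usual assumptions (an out buffer keeps its own dtype, requires_grad needs a floating-point result, and the device line is guarded since CUDA may not be available):

import torch

buf = torch.empty(5)                            # float32 buffer
torch.arange(5, out=buf)                        # expected: tensor([0., 1., 2., 3., 4.])

g = torch.arange(0.0, 5.0, requires_grad=True)  # autograd only tracks floating-point tensors

if torch.cuda.is_available():
    d = torch.arange(5, device="cuda")          # placed on the current CUDA device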

© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/generated/torch.arange.html