torch.use_deterministic_algorithms

torch.use_deterministic_algorithms(d) [source]

Sets whether PyTorch operations must use “deterministic” algorithms, that is, algorithms which, given the same input and run on the same software and hardware, always produce the same output. When d is True, operations will use deterministic algorithms when available; if only nondeterministic algorithms are available, they will raise a RuntimeError when called.
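A minimal sketch of toggling the flag, assuming PyTorch 1.8+ is installed. Matrix multiplication on CPU is deterministic, so repeated calls with the same inputs yield bitwise-identical results; the final call restores the default setting:

```python
import torch

torch.manual_seed(0)
torch.use_deterministic_algorithms(True)

a = torch.randn(4, 4)
b = torch.randn(4, 4)

# CPU matmul is deterministic: repeated calls produce identical tensors
r1 = a @ b
r2 = a @ b
assert torch.equal(r1, r2)

# Restore the default (nondeterministic algorithms allowed)
torch.use_deterministic_algorithms(False)
```

Note that the flag is global process state, so it affects every subsequent operation, not just the ones in the enclosing scope.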

Warning

This feature is in beta, and its design and implementation may change in the future.

The following normally-nondeterministic operations will act deterministically when d=True:

The following normally-nondeterministic operations will throw a RuntimeError when d=True:

A handful of CUDA operations are nondeterministic when the CUDA version is 10.2 or greater, unless the environment variable CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8 is set. See the CUDA documentation for more details: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility If neither environment variable configuration is set, these operations will raise a RuntimeError when called with CUDA tensors:
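The environment variable must be set before the CUDA context is created, so the simplest approach is to export it before launching the Python process. A sketch (the script name is hypothetical):

```shell
# Set before launching the process so cuBLAS picks the workspace
# configuration up when the CUDA context is created.
export CUBLAS_WORKSPACE_CONFIG=:4096:8
python train.py  # hypothetical training script
```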

Note that deterministic operations tend to have worse performance than nondeterministic operations.

Parameters

d (bool) – If True, force operations to use deterministic algorithms. If False, allow nondeterministic algorithms.

© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/generated/torch.use_deterministic_algorithms.html