tf.distribute.experimental.CommunicationImplementation
Cross-device communication implementation.
- AUTO: Automatically chosen by TensorFlow.
- RING: TensorFlow's ring algorithms for all-reduce and all-gather.
- NCCL: NVIDIA®'s NCCL library. This is currently only used for all-reduce on GPUs; all-reduce on CPUs, all-gather, and broadcast fall back to RING.
| Class Variables | |
|---|---|
| AUTO | tf.distribute.experimental.CommunicationImplementation |
| NCCL | tf.distribute.experimental.CommunicationImplementation |
| RING | tf.distribute.experimental.CommunicationImplementation |
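As a minimal sketch (assuming TensorFlow 2.4, a multi-worker setup already configured via `TF_CONFIG`, and GPUs available for NCCL), the enum is typically passed through `tf.distribute.experimental.CommunicationOptions` when constructing a `tf.distribute.MultiWorkerMirroredStrategy`:

```python
import tensorflow as tf

# Request NCCL for all-reduce; all-reduce on CPUs, all-gather, and
# broadcast fall back to RING as described above.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)

# The communication options are supplied when the strategy is constructed.
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)
```

Leaving the implementation as AUTO lets TensorFlow pick a collective implementation based on the devices and topology it detects.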
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/distribute/experimental/CommunicationImplementation