Module: tf.contrib.distribute

A distributed computation library for TensorFlow.

See tensorflow/contrib/distribute/README.md for overview and examples.

Classes

class AllReduceCrossDeviceOps: Reduction using all-reduce.
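
All-reduce combines a value from every device (for example, per-replica gradients) so that each device ends up holding the identical reduced result. A plain-Python sketch of the idea — an illustration only, not the actual AllReduceCrossDeviceOps implementation:

```python
def all_reduce_sum(per_device_values):
    """Conceptual all-reduce: every device receives the sum of all inputs.

    per_device_values: one vector (list of floats) per device.
    Returns one reduced vector per device; all copies are identical.
    """
    # Element-wise sum across devices.
    total = [sum(vals) for vals in zip(*per_device_values)]
    # Every device gets its own copy of the same reduced result.
    return [list(total) for _ in per_device_values]

# Two devices each hold a gradient vector.
grads = [[1.0, 2.0], [3.0, 4.0]]
reduced = all_reduce_sum(grads)
# → [[4.0, 6.0], [4.0, 6.0]]
```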

class CollectiveAllReduceStrategy: Distribution strategy that uses collective ops for all-reduce.

class CrossDeviceOps: Base class for cross-device reduction and broadcasting algorithms.

class DistributeConfig: A config tuple for distribution strategies.

class DistributionStrategy: A list of devices with a state & compute distribution policy.

class MirroredStrategy: Mirrors variables to distribute computation across multiple devices and machines.
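
The mirrored pattern keeps one copy of each variable per device and applies the same (all-reduced) update to every copy, so the replicas never diverge. A minimal pure-Python illustration of that bookkeeping — not the actual MirroredStrategy API:

```python
class MirroredVariable:
    """Toy mirrored variable: one value per device, updated in lockstep."""

    def __init__(self, devices, initial_value):
        self.values = {d: initial_value for d in devices}

    def apply_update(self, delta):
        # The same reduced delta is applied on every device,
        # so all copies remain identical.
        for d in self.values:
            self.values[d] += delta

devices = ["/gpu:0", "/gpu:1"]
w = MirroredVariable(devices, 1.0)
# Per-device gradients are averaged (the cross-device reduction)...
per_device_grads = [0.2, 0.4]
avg_grad = sum(per_device_grads) / len(per_device_grads)
# ...and the identical update is applied to each copy.
w.apply_update(-0.1 * avg_grad)
```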

class Monitor: Executes training steps, and handles checkpointing and recovery.

class MultiWorkerAllReduce: All-reduce algorithms for distributed TensorFlow.

class OneDeviceStrategy: A distribution strategy for running on a single device.

class ParameterServerStrategy: A parameter server DistributionStrategy.

class ReplicaContext: The tf.distribute.Strategy API available within a replica context.

class StandardInputStep: Step with a standard implementation of input handling.

class StandardSingleLossStep: A step function that implements a training step for a feed-forward network.

class Step: Interface for performing each step of a training algorithm.

class TPUStrategy: TPU distribution strategy implementation.

class UpdateContext: Context manager indicating that code is running inside update() or update_non_slot().

Functions

get_cross_replica_context(...): Returns the current tf.distribute.Strategy if in a cross-replica context.

get_distribution_strategy(...): Returns the current tf.distribute.Strategy object.

get_loss_reduction(...): Returns the tf.distribute.ReduceOp corresponding to the last loss reduction.

get_replica_context(...): Returns the current tf.distribute.ReplicaContext or None.

get_strategy(...): Returns the current tf.distribute.Strategy object.

has_distribution_strategy(...): Returns whether there is a current non-default tf.distribute.Strategy.

has_strategy(...): Returns whether there is a current non-default tf.distribute.Strategy.

in_cross_replica_context(...): Returns True if in a cross-replica context.
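
The replica / cross-replica distinction works like a thread-local flag: code running inside a replica sees a per-replica context, while coordination code outside sees the cross-replica context. A hypothetical plain-Python sketch of that bookkeeping — the names here are invented for illustration, not the TensorFlow implementation:

```python
import contextlib
import threading

_state = threading.local()

@contextlib.contextmanager
def replica_context(replica_id):
    # Entering per-replica code: record which replica we are.
    _state.replica_id = replica_id
    try:
        yield
    finally:
        # Leaving replica code returns us to the cross-replica context.
        _state.replica_id = None

def get_replica_id():
    """Returns the current replica id, or None in cross-replica context."""
    return getattr(_state, "replica_id", None)

def in_cross_replica():
    return get_replica_id() is None

assert in_cross_replica()            # coordination code: cross-replica
with replica_context(0):
    assert not in_cross_replica()    # per-replica code sees its context
```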

initialize_tpu_system(...): Initialize the TPU devices.

require_replica_context(...): Verifies that the caller is executing inside the replica context given by replica_ctx.

run_standard_tensorflow_server(...): Starts a standard TensorFlow server.

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/distribute