tf.contrib.training.train

Runs the training loop.

Args
train_op A Tensor that, when executed, will apply the gradients and return the loss value.
logdir The directory where the graph and checkpoints are saved.
master The URL of the master.
is_chief Specifies whether or not this replica is the chief during distributed (replica) training.
scaffold A tf.compat.v1.train.Scaffold instance.
hooks List of tf.estimator.SessionRunHook callbacks which are run inside the training loop.
chief_only_hooks List of tf.estimator.SessionRunHook instances which are run inside the training loop for the chief trainer only.
save_checkpoint_secs The frequency, in seconds, that a checkpoint is saved using a default checkpoint saver. If save_checkpoint_secs is set to None, then the default checkpoint saver isn't used.
save_summaries_steps The frequency, in number of global steps, that the summaries are written to disk using a default summary saver. If save_summaries_steps is set to None, then the default summary saver isn't used.
config An instance of tf.compat.v1.ConfigProto.
max_wait_secs Maximum time workers should wait for the session to become available. This should be kept relatively short to help detect incorrect code, but sometimes may need to be increased if the chief takes a while to start up.
run_metadata A RunMetadata protocol buffer.
Returns
The value of the loss function after training.
Raises
ValueError if logdir is None while either save_checkpoint_secs or save_summaries_steps is not None (the default savers need a directory to write to).

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/training/train