tensorflow::ops::ApplyAdam
#include <training_ops.h>
Update '*var' according to the Adam algorithm.
Summary
$$lr_t := \text{learning\_rate} * \sqrt{1 - \beta_2^t} / (1 - \beta_1^t)$$
$$m_t := \beta_1 * m_{t-1} + (1 - \beta_1) * g$$
$$v_t := \beta_2 * v_{t-1} + (1 - \beta_2) * g * g$$
$$\text{variable} := \text{variable} - lr_t * m_t / (\sqrt{v_t} + \epsilon)$$
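For concreteness, here is a minimal scalar sketch of one Adam step in C++ that mirrors the formulas above. This is illustrative only, not the TF API: the op applies the same arithmetic element-wise to tensors and updates var, m, and v in place.

```cpp
// Scalar model of a single Adam step; names are illustrative.
#include <cmath>

void adam_step(float& var, float& m, float& v, float grad,
               float lr, float beta1, float beta2, float epsilon,
               float beta1_power, float beta2_power) {
  // Bias-corrected step size: lr_t = lr * sqrt(1 - beta2^t) / (1 - beta1^t).
  const float lr_t = lr * std::sqrt(1.0f - beta2_power) / (1.0f - beta1_power);
  m = beta1 * m + (1.0f - beta1) * grad;         // first-moment estimate
  v = beta2 * v + (1.0f - beta2) * grad * grad;  // second-moment estimate
  var -= lr_t * m / (std::sqrt(v) + epsilon);    // apply the update
}
```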
 Arguments:
- scope: A Scope object
 - var: Should be from a Variable().
 - m: Should be from a Variable().
 - v: Should be from a Variable().
 - beta1_power: Must be a scalar.
 - beta2_power: Must be a scalar.
 - lr: Scaling factor. Must be a scalar.
 - beta1: Momentum factor. Must be a scalar.
 - beta2: Momentum factor. Must be a scalar.
 - epsilon: Ridge term. Must be a scalar.
 - grad: The gradient.
 
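Below is a minimal construction sketch showing how these arguments wire together, assuming a working TensorFlow C++ build. The tensor shape, hyperparameter values, and the Placeholder used for the gradient are illustrative assumptions, not part of this op's contract.

```cpp
// Sketch: constructing ApplyAdam with the C++ ops API.
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/cc/ops/training_ops.h"

int main() {
  using namespace tensorflow;
  using namespace tensorflow::ops;

  Scope root = Scope::NewRootScope();

  // Parameter and Adam's moment accumulators; all must come from Variables.
  auto var = Variable(root, {2, 2}, DT_FLOAT);
  auto m   = Variable(root, {2, 2}, DT_FLOAT);
  auto v   = Variable(root, {2, 2}, DT_FLOAT);

  // Scalar hyperparameters; beta1_power/beta2_power hold beta1^t and beta2^t.
  auto beta1_power = Const(root, 0.9f);
  auto beta2_power = Const(root, 0.999f);
  auto lr      = Const(root, 0.001f);
  auto beta1   = Const(root, 0.9f);
  auto beta2   = Const(root, 0.999f);
  auto epsilon = Const(root, 1e-8f);

  // The gradient for this step; fed at run time in this sketch.
  auto grad = Placeholder(root, DT_FLOAT);

  // adam.out is the same tensor as "var" after the in-place update.
  auto adam = ApplyAdam(root, var, m, v, beta1_power, beta2_power,
                        lr, beta1, beta2, epsilon, grad);
  return 0;
}
```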
Optional attributes (see Attrs):
- use_locking: If True, updating of the var, m, and v tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
- use_nesterov: If True, uses the Nesterov update (see the sketch below).
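A hedged sketch of setting these attributes, reusing the names from the construction example above; the attribute values are illustrative:

```cpp
// Sketch: enabling optional attributes through ApplyAdam::Attrs.
auto adam = ApplyAdam(root, var, m, v, beta1_power, beta2_power,
                      lr, beta1, beta2, epsilon, grad,
                      ApplyAdam::UseLocking(true).UseNesterov(true));
```

ApplyAdam::UseLocking is a static function that returns an Attrs value whose setters chain, so both attributes can be set in a single expression.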
Returns:
- Output: Same as "var".
| Constructors and Destructors | |
|---|---|
| ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad) | |
| ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ApplyAdam::Attrs & attrs) | |

| Public attributes | |
|---|---|
| operation | Operation |
| out | ::tensorflow::Output |

| Public functions | |
|---|---|
| node() const | ::tensorflow::Node * |
| operator::tensorflow::Input() const | |
| operator::tensorflow::Output() const | |

| Public static functions | |
|---|---|
| UseLocking(bool x) | Attrs |
| UseNesterov(bool x) | Attrs |

| Structs | |
|---|---|
| tensorflow::ops::ApplyAdam::Attrs | Optional attribute setters for ApplyAdam. |
Public attributes
operation
Operation operation
out
::tensorflow::Output out
Public functions
ApplyAdam
ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad)
ApplyAdam
ApplyAdam(const ::tensorflow::Scope & scope, ::tensorflow::Input var, ::tensorflow::Input m, ::tensorflow::Input v, ::tensorflow::Input beta1_power, ::tensorflow::Input beta2_power, ::tensorflow::Input lr, ::tensorflow::Input beta1, ::tensorflow::Input beta2, ::tensorflow::Input epsilon, ::tensorflow::Input grad, const ApplyAdam::Attrs & attrs)
node
::tensorflow::Node * node() const
operator::tensorflow::Input
operator::tensorflow::Input() const
operator::tensorflow::Output
operator::tensorflow::Output() const
Public static functions
UseLocking
Attrs UseLocking(bool x)
UseNesterov
Attrs UseNesterov(bool x)