tf.raw_ops.Relu
Computes rectified linear: max(features, 0).
```
tf.raw_ops.Relu(
    features, name=None
)
```
See: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

Example usage:
>>> tf.nn.relu([-2., 0., -0., 3.]).numpy()
array([ 0.,  0., -0.,  3.], dtype=float32)
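The same computation can be invoked through the raw op directly. A minimal sketch, assuming an eager TF 2.x context in which `tf.raw_ops` endpoints are called with keyword arguments:

>>> import tensorflow as tf
>>> tf.raw_ops.Relu(features=tf.constant([-2., 0., -0., 3.])).numpy()
array([ 0.,  0., -0.,  3.], dtype=float32)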
| Args | |
|---|---|
| features | A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64, qint8. |
| name | A name for the operation (optional). |

| Returns | |
|---|---|
| A Tensor. Has the same type as features. | |
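Because the output keeps the input's dtype, the op can also be sanity-checked on an integer input. A small sketch, continuing the example above (assumes `tf` is imported and eager execution is enabled):

>>> out = tf.raw_ops.Relu(features=tf.constant([-3, 0, 5], dtype=tf.int32))
>>> out.numpy()
array([0, 0, 5], dtype=int32)
>>> out.dtype
tf.int32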
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/raw_ops/Relu