tf.raw_ops.QuantizedMatMulWithBiasAndRequantize

Performs a quantized matrix multiplication of a by the matrix b, adds bias, and requantizes the result into the output range given by min_freezed_output and max_freezed_output.

Args
a A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. A matrix to be multiplied.
b A Tensor. Must be one of the following types: qint8, quint8, qint32, qint16, quint16. A matrix to be multiplied.
bias A Tensor. Must be one of the following types: float32, qint32. A 1-D bias tensor whose size matches the inner dimension of b (after transposition if transpose_b is True).
min_a A Tensor of type float32. The float value that the lowest quantized a value represents.
max_a A Tensor of type float32. The float value that the highest quantized a value represents.
min_b A Tensor of type float32. The float value that the lowest quantized b value represents.
max_b A Tensor of type float32. The float value that the highest quantized b value represents.
min_freezed_output A Tensor of type float32. The float value that the lowest quantized output value represents after requantization.
max_freezed_output A Tensor of type float32. The float value that the highest quantized output value represents after requantization.
Toutput An optional tf.DType from: tf.qint8, tf.quint8, tf.qint32, tf.qint16, tf.quint16. Defaults to tf.quint8.
transpose_a An optional bool. Defaults to False. If True, a is transposed before multiplication.
transpose_b An optional bool. Defaults to False. If True, b is transposed before multiplication.
input_quant_mode An optional string from: "MIN_FIRST", "SCALED". Defaults to "MIN_FIRST". Quantization mode used for the input data.
name A name for the operation (optional).
Returns
A tuple of Tensor objects (out, min_out, max_out).
out A Tensor of type Toutput.
min_out A Tensor of type float32. The float value that the lowest quantized output value represents.
max_out A Tensor of type float32. The float value that the highest quantized output value represents.
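
A minimal usage sketch follows, assuming a TensorFlow build whose CPU kernels for this fused op are available (they are provided by the oneDNN/MKL build); the input values, quantization ranges, and the requantized output range [0.0, 64.0] are hypothetical and chosen only for illustration.

import numpy as np
import tensorflow as tf

# Hypothetical float inputs; a is [2, 2], b is [2, 2], bias is [2].
a_float = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
b_float = np.array([[5.0, 6.0], [7.0, 8.0]], dtype=np.float32)
bias = tf.constant([0.5, -0.5], dtype=tf.float32)

# Quantize a to quint8 with MIN_FIRST (matching the default input_quant_mode)
# and b to qint8 with SCALED, as is typical for weights.
a_q, min_a, max_a = tf.quantization.quantize(a_float, 0.0, 4.0, tf.quint8, mode="MIN_FIRST")
b_q, min_b, max_b = tf.quantization.quantize(b_float, -8.0, 8.0, tf.qint8, mode="SCALED")

# Fused matmul + bias add, requantized into a pre-chosen ("frozen") output range.
out, min_out, max_out = tf.raw_ops.QuantizedMatMulWithBiasAndRequantize(
    a=a_q, b=b_q, bias=bias,
    min_a=min_a, max_a=max_a,
    min_b=min_b, max_b=max_b,
    min_freezed_output=0.0,    # hypothetical requantization range
    max_freezed_output=64.0,
    Toutput=tf.quint8)

# Map the quantized result back to floats using the returned range.
result = tf.quantization.dequantize(out, min_out, max_out)

Fixing the requantization range ahead of time lets the op emit 8-bit output directly, instead of producing a qint32 result that must be requantized in a separate step.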
