torch.fake_quantize_per_channel_affine

torch.fake_quantize_per_channel_affine(input, scale, zero_point, axis, quant_min, quant_max) → Tensor

Returns a new tensor with the data in input fake quantized per channel using scale, zero_point, quant_min and quant_max, across the channel specified by axis.

\text{output} = \left( \min\left( \text{quant\_max}, \max\left( \text{quant\_min}, \text{std::nearby\_int}(\text{input} / \text{scale}) + \text{zero\_point} \right) \right) - \text{zero\_point} \right) \times \text{scale}
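
The same computation can be written out with basic tensor ops. The sketch below is illustrative only (the helper name and broadcasting approach are not part of the PyTorch API); it assumes scale and zero_point are 1-D tensors with one entry per channel along axis:

import torch

def fake_quantize_per_channel_reference(x, scale, zero_point, axis, quant_min, quant_max):
    # Reshape the per-channel parameters so they broadcast along `axis`.
    shape = [1] * x.dim()
    shape[axis] = -1
    scale = scale.reshape(shape)
    zero_point = zero_point.reshape(shape)
    # Quantize: round to the nearest integer, shift by zero_point, clamp to [quant_min, quant_max].
    q = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    # Dequantize back to floating point; this round trip is what "fake" quantization means.
    return (q - zero_point) * scale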
Parameters
  • input (Tensor) – the input value(s), in torch.float32.
  • scale (Tensor) – quantization scale, per channel
  • zero_point (Tensor) – quantization zero_point, per channel
  • axis (int32) – channel axis
  • quant_min (int64) – lower bound of the quantized domain
  • quant_max (int64) – upper bound of the quantized domain
Returns

A newly fake-quantized per-channel tensor

Return type

Tensor

Example:

>>> x = torch.randn(2, 2, 2)
>>> x
tensor([[[-0.2525, -0.0466],
         [ 0.3491, -0.2168]],

        [[-0.5906,  1.6258],
         [ 0.6444, -0.0542]]])
>>> scales = (torch.randn(2) + 1) * 0.05
>>> scales
tensor([0.0475, 0.0486])
>>> zero_points = torch.zeros(2).to(torch.long)
>>> zero_points
tensor([0, 0])
>>> torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)
tensor([[[0.0000, 0.0000],
         [0.3405, 0.0000]],

        [[0.0000, 1.6134],
         [0.6323, 0.0000]]])
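
As a rough cross-check (not part of the original example), the illustrative fake_quantize_per_channel_reference helper sketched after the formula above should reproduce these values:

>>> y = torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)
>>> torch.allclose(y, fake_quantize_per_channel_reference(x, scales, zero_points, 1, 0, 255))
True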
