GroupNorm

class torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True) [source]

Applies Group Normalization over a mini-batch of inputs as described in the paper Group Normalization.

y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta

The input channels are separated into num_groups groups, each containing num_channels / num_groups channels; num_channels must be divisible by num_groups. The mean and standard deviation are calculated separately over each group. γ and β are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).

This layer uses statistics computed from input data in both training and evaluation modes.
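
A minimal sketch (not part of the original reference) of the computation above: reshape the input into groups, normalize each group with the biased variance, then apply the per-channel affine parameters. The variable names are illustrative only.

>>> import torch
>>> import torch.nn as nn
>>> x = torch.randn(20, 6, 10, 10)
>>> m = nn.GroupNorm(3, 6)                              # 3 groups of 2 channels each
>>> g = x.view(20, 3, -1)                               # (N, num_groups, elements per group)
>>> mean = g.mean(dim=-1, keepdim=True)
>>> var = g.var(dim=-1, unbiased=False, keepdim=True)   # biased estimator, as noted above
>>> y = ((g - mean) / torch.sqrt(var + m.eps)).view_as(x)
>>> y = y * m.weight.view(1, -1, 1, 1) + m.bias.view(1, -1, 1, 1)
>>> torch.allclose(y, m(x), atol=1e-6)
True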

Parameters
  • num_groups (int) – number of groups to separate the channels into
  • num_channels (int) – number of channels expected in input
  • eps (float) – a value added to the denominator for numerical stability. Default: 1e-5
  • affine (bool) – if True, this module has learnable per-channel affine parameters, initialized to ones (for weights) and zeros (for biases). Default: True.
Shape:
  • Input: (N, C, *) where C = num_channels
  • Output: (N, C, *) (same shape as input)
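
As an illustrative note (an assumption, not text from the reference), the trailing * means any number of additional dimensions, so e.g. a 3-dimensional (N, C, L) input is accepted as well:

>>> m = nn.GroupNorm(3, 6)
>>> m(torch.randn(4, 6, 32)).shape
torch.Size([4, 6, 32])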

Examples:

>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent to InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent to LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
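
As a follow-up sketch (assuming the affine parameters are still at their default initialization of ones and zeros), the single-group case above can be checked against a plain layer norm over (C, H, W):

>>> import torch.nn.functional as F
>>> gn = nn.GroupNorm(1, 6)
>>> torch.allclose(gn(input), F.layer_norm(input, input.shape[1:]), atol=1e-6)
True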

© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.8.0/generated/torch.nn.GroupNorm.html