torch.nn.utils.prune.ln_structured

torch.nn.utils.prune.ln_structured(module, name, amount, n, dim, importance_scores=None) [source]

Prunes the tensor corresponding to the parameter called name in module by removing the specified amount of (currently unpruned) channels along the specified dim with the lowest Ln-norm. Modifies module in place (and also returns the modified module) by:
  1) adding a named buffer called name+'_mask' corresponding to the binary mask applied to the parameter name by the pruning method.
  2) replacing the parameter name with its pruned version, while the original (unpruned) parameter is stored in a new parameter named name+'_orig'.
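
For example, pruning name='weight' leaves the module with a weight_mask buffer and a weight_orig parameter, and weight itself is recomputed from the two. A minimal sketch of this bookkeeping (the layer shape and pruning settings here are arbitrary):

>>> import torch
>>> from torch import nn
>>> from torch.nn.utils import prune
>>> conv = nn.Conv2d(5, 3, 2)
>>> # remove 1 of the 3 output channels (dim=0), ranked by L2 norm
>>> conv = prune.ln_structured(conv, name='weight', amount=1, n=2, dim=0)
>>> sorted(name for name, _ in conv.named_buffers())
['weight_mask']
>>> sorted(name for name, _ in conv.named_parameters())
['bias', 'weight_orig']
>>> # the pruned 'weight' is the original parameter times the binary mask
>>> torch.equal(conv.weight, conv.weight_orig * conv.weight_mask)
True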

Parameters
  • module (nn.Module) – module containing the tensor to prune
  • name (str) – parameter name within module on which pruning will act.
  • amount (int or float) – quantity of parameters to prune. If float, should be between 0.0 and 1.0 and represent the fraction of parameters to prune. If int, it represents the absolute number of parameters to prune.
  • n (int, float, inf, -inf, 'fro', 'nuc') – See documentation of valid entries for argument p in torch.norm().
  • dim (int) – index of the dim along which we define channels to prune.
  • importance_scores (torch.Tensor) – tensor of importance scores (of same shape as module parameter) used to compute mask for pruning. The values in this tensor indicate the importance of the corresponding elements in the parameter being pruned. If unspecified or None, the module parameter will be used in its place.
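
To illustrate how amount, n, and dim interact (a sketch with arbitrary shapes): for a Conv2d weight of shape (3, 5, 2, 2), dim=1 selects the 5 input channels as the units to prune, each channel is ranked by its Ln-norm taken over the remaining dimensions, and a float amount is interpreted as a fraction of the channels along dim (here 0.4 of 5, i.e. 2 channels):

>>> import torch
>>> from torch import nn
>>> from torch.nn.utils import prune
>>> conv = nn.Conv2d(5, 3, 2)  # weight shape: (3, 5, 2, 2)
>>> conv = prune.ln_structured(conv, 'weight', amount=0.4, n=1, dim=1)
>>> # count the input channels whose mask is entirely zero
>>> mask = conv.weight_mask
>>> int((mask.sum(dim=(0, 2, 3)) == 0).sum())
2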
Returns

modified (i.e. pruned) version of the input module

Return type

module (nn.Module)

Examples

>>> from torch import nn
>>> from torch.nn.utils import prune
>>> m = prune.ln_structured(
...     nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf')
... )
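
If importance_scores is supplied, the channel ranking is computed from those scores instead of from the parameter values themselves. A sketch (the random scores below are purely illustrative):

>>> import torch
>>> from torch import nn
>>> from torch.nn.utils import prune
>>> conv = nn.Conv2d(5, 3, 2)
>>> scores = torch.rand_like(conv.weight)  # stand-in scores, same shape as 'weight'
>>> conv = prune.ln_structured(
...     conv, 'weight', amount=1, n=2, dim=0, importance_scores=scores
... )
>>> # the single pruned output channel is the one with the lowest
>>> # L2 norm of the scores, not of the weights
>>> pruned = (conv.weight_mask.sum(dim=(1, 2, 3)) == 0).nonzero()
>>> int(pruned) == int(scores.norm(p=2, dim=(1, 2, 3)).argmin())
True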
