tf.contrib.layers.rev_block

A block of reversible residual layers.

A reversible residual layer is defined as:

y1 = x1 + f(x2, f_side_input)
y2 = x2 + g(y1, g_side_input)

A reversible residual block, defined here, is a series of reversible residual layers.
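
Because each layer is purely additive, its inputs can be recovered from its outputs:

x2 = y2 - g(y1, g_side_input)
x1 = y1 - f(x2, f_side_input)

This inverse is what the efficient backprop codepath relies on to recompute activations rather than store them.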

Limitations:

  • f and g must not close over any Tensors; all side inputs to f and g should be passed in via f_side_input and g_side_input, which will be forwarded to f and g (see the sketch after this list).
  • f and g must not change the dimensionality of their inputs in order for the addition in the equations above to work.
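
A minimal sketch of the side-input pattern, assuming the TF 1.x API where tf.contrib is available (the layer width and variable names below are illustrative, not part of the API):

import tensorflow as tf  # TF 1.x

channels = 64  # illustrative width; x1 and x2 are assumed to be [batch, channels]

def f(x, side_inputs):
    # The conditioning Tensor arrives through `side_inputs` (forwarded from
    # f_side_input) rather than being captured from an enclosing scope.
    w = tf.get_variable("w_f", shape=[channels, channels])
    return tf.nn.relu(tf.matmul(x, w) + side_inputs[0])

def g(y, side_inputs):
    # Same pattern for g; get_variable lets each layer own its weights.
    w = tf.get_variable("w_g", shape=[channels, channels])
    return tf.nn.relu(tf.matmul(y, w) + side_inputs[0])
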
Args:
  x1: a float Tensor.
  x2: a float Tensor.
  f: a function, (Tensor) -> (Tensor) (or list of such of length num_layers). Should not change the shape of the Tensor. Can make calls to get_variable. See f_side_input if there are side inputs.
  g: a function, (Tensor) -> (Tensor) (or list of such of length num_layers). Should not change the shape of the Tensor. Can make calls to get_variable. See g_side_input if there are side inputs.
  num_layers: int, number of reversible residual layers. Each layer will apply f and g according to the equations above, with new variables in each layer.
  f_side_input: list of Tensors, side input to f. If not None, signature of f should be (Tensor, list) -> (Tensor).
  g_side_input: list of Tensors, side input to g. If not None, signature of g should be (Tensor, list) -> (Tensor).
  is_training: bool, whether to actually use the efficient backprop codepath.

Returns:
  y1, y2: tuple of float Tensors.
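
A hedged end-to-end usage sketch, reusing the f, g, and channels defined in the sketch above (placeholder shapes and the shared side input are illustrative):

x1 = tf.placeholder(tf.float32, [None, channels])
x2 = tf.placeholder(tf.float32, [None, channels])
cond = tf.placeholder(tf.float32, [channels])  # broadcast-added inside f and g

y1, y2 = tf.contrib.layers.rev_block(
    x1, x2, f, g,
    num_layers=2,
    f_side_input=[cond],
    g_side_input=[cond],
    is_training=True)  # use the efficient backprop codepath

Each of the two layers applies f and g with its own variables, and y1, y2 have the same shapes as x1, x2.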
