tf.compat.v2.keras.layers.GRU

Gated Recurrent Unit - Cho et al. 2014.

Inherits From: GRU

Based on available runtime hardware and constraints, this layer will choose different implementations (cuDNN-based or pure-TensorFlow) to maximize performance. If a GPU is available and all the arguments to the layer meet the requirements of the cuDNN kernel (see below for details), the layer will use a fast cuDNN implementation.

The requirements to use the cuDNN implementation are:

  1. activation == 'tanh'
  2. recurrent_activation == 'sigmoid'
  3. recurrent_dropout == 0
  4. unroll is False
  5. use_bias is True
  6. reset_after is True
  7. Inputs are either not masked or are strictly right-padded.
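
For illustration only, here is a minimal sketch, assuming TensorFlow 2.x where this class is exposed as tf.keras.layers.GRU, of a layer whose default arguments satisfy the requirements above:

  import tensorflow as tf

  # Default arguments already satisfy the cuDNN requirements listed above:
  # activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0,
  # unroll=False, use_bias=True, reset_after=True, and no mask is passed.
  inputs = tf.random.normal([32, 10, 8])    # (batch, timesteps, features)

  gru = tf.keras.layers.GRU(4)
  output = gru(inputs)                      # last output only
  print(output.shape)                       # (32, 4)

  gru = tf.keras.layers.GRU(4, return_sequences=True, return_state=True)
  whole_sequence_output, final_state = gru(inputs)
  print(whole_sequence_output.shape)        # (32, 10, 4)
  print(final_state.shape)                  # (32, 4)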

There are two variants of the GRU implementation. The default one is based on v3 and has the reset gate applied to the hidden state before the matrix multiplication. The other one is based on the original formulation and has the order reversed.

The second variant is compatible with CuDNNGRU (GPU-only) and allows inference on CPU; it therefore has separate biases for the kernel and the recurrent_kernel. To use this variant, set reset_after=True and recurrent_activation='sigmoid'.
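
A hedged sketch, again assuming TensorFlow 2.x naming, of selecting each variant explicitly; only reset_after (and, for the cuDNN-compatible case, recurrent_activation) differs:

  import tensorflow as tf

  # cuDNN-compatible variant: reset gate applied after the matrix
  # multiplication, with separate input and recurrent biases.
  cudnn_compatible = tf.keras.layers.GRU(
      16, reset_after=True, recurrent_activation='sigmoid')

  # Original formulation: reset gate applied before the matrix
  # multiplication; this variant cannot use the cuDNN kernel.
  original_variant = tf.keras.layers.GRU(16, reset_after=False)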

Arguments

  • units: Positive integer, dimensionality of the output space.
  • activation: Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
  • recurrent_activation: Activation function to use for the recurrent step. Default: sigmoid. If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x).
  • use_bias: Boolean, whether the layer uses a bias vector.
  • kernel_initializer: Initializer for the kernel weights matrix, used for the linear transformation of the inputs.
  • recurrent_initializer: Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state.
  • bias_initializer: Initializer for the bias vector.
  • kernel_regularizer: Regularizer function applied to the kernel weights matrix.
  • recurrent_regularizer: Regularizer function applied to the recurrent_kernel weights matrix.
  • bias_regularizer: Regularizer function applied to the bias vector.
  • activity_regularizer: Regularizer function applied to the output of the layer (its "activation").
  • kernel_constraint: Constraint function applied to the kernel weights matrix.
  • recurrent_constraint: Constraint function applied to the recurrent_kernel weights matrix.
  • bias_constraint: Constraint function applied to the bias vector.
  • dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs.
  • recurrent_dropout: Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state.
  • implementation: Implementation mode, either 1 or 2. Mode 1 will structure its operations as a larger number of smaller dot products and additions, whereas mode 2 will batch them into fewer, larger operations. These modes will have different performance profiles on different hardware and for different applications.
  • return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
  • return_state: Boolean. Whether to return the last state in addition to the output.
  • go_backwards: Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.
  • stateful: Boolean (default False). If True, the last state for each sample at index i in a batch will be used as the initial state for the sample of index i in the following batch.
  • unroll: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
  • reset_after: GRU convention (whether to apply the reset gate after or before the matrix multiplication). False = "before", True = "after" (default and cuDNN compatible).

Call arguments:

  • inputs: A 3D tensor.
  • mask: Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked (optional, defaults to None).
  • training: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional, defaults to None).
  • initial_state: List of initial state tensors to be passed to the first call of the cell (optional, defaults to None which causes creation of zero-filled initial state tensors).
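
An illustrative sketch of these call arguments (shapes and values are arbitrary, and TensorFlow 2.x naming is assumed):

  import tensorflow as tf

  inputs = tf.random.normal([2, 5, 3])       # (samples, timesteps, features)
  mask = tf.constant([[True, True, True, False, False],
                      [True, True, False, False, False]])   # right-padded mask
  initial_state = [tf.zeros([2, 4])]         # one state tensor of shape (samples, units)

  gru = tf.keras.layers.GRU(4, dropout=0.2)
  output = gru(inputs, mask=mask, training=True, initial_state=initial_state)
  print(output.shape)                        # (2, 4)
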
Attributes
activation
bias_constraint
bias_initializer
bias_regularizer
dropout
implementation
kernel_constraint
kernel_initializer
kernel_regularizer
recurrent_activation
recurrent_constraint
recurrent_dropout
recurrent_initializer
recurrent_regularizer
reset_after
states
units
use_bias

Methods

get_dropout_mask_for_cell

Get the dropout mask for RNN cell's input.

It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args

  • inputs: The input tensor whose shape will be used to generate the dropout mask.
  • training: Boolean tensor; whether the call is in training mode. Dropout will be ignored in non-training mode.
  • count: Int, how many dropout masks will be generated. Useful for cells that have internal weights fused together.

Returns

List of mask tensors, newly generated or cached, based on context.
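
A minimal sketch of calling this method directly (assuming TensorFlow 2.x naming; the input tensor and count are arbitrary):

  import tensorflow as tf

  # A non-zero dropout rate is required; otherwise no mask is created and
  # None is returned.
  layer = tf.keras.layers.GRU(4, dropout=0.3)

  step_input = tf.random.normal([2, 8])      # one timestep: (batch, features)
  masks = layer.get_dropout_mask_for_cell(step_input, training=True, count=3)
  print(len(masks), masks[0].shape)          # 3 masks of shape (2, 8)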

get_initial_state

get_recurrent_dropout_mask_for_cell

Get the recurrent dropout mask for RNN cell.

It will create a mask based on context if there is no existing cached mask. If a new mask is generated, it will update the cache in the cell.

Args

  • inputs: The input tensor whose shape will be used to generate the recurrent dropout mask.
  • training: Boolean tensor; whether the call is in training mode. Dropout will be ignored in non-training mode.
  • count: Int, how many dropout masks will be generated. Useful for cells that have internal weights fused together.

Returns

List of mask tensors, newly generated or cached, based on context.
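
A matching sketch for the recurrent mask (same TensorFlow 2.x assumption as above); the tensor passed in is shaped like the previous hidden state:

  import tensorflow as tf

  # A non-zero recurrent_dropout rate is required for a mask to be created.
  layer = tf.keras.layers.GRU(4, recurrent_dropout=0.3)

  prev_state = tf.zeros([2, 4])              # previous state: (batch, units)
  masks = layer.get_recurrent_dropout_mask_for_cell(prev_state, training=True, count=3)
  print(len(masks), masks[0].shape)          # 3 masks of shape (2, 4)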

reset_dropout_mask

Reset the cached dropout masks if any.

It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it would introduce an unreasonable bias against certain indices of data within the batch.

reset_recurrent_dropout_mask

Reset the cached recurrent dropout masks if any.

It is important for the RNN layer to invoke this in its call() method so that the cached mask is cleared before calling cell.call(). The mask should be cached across timesteps within the same batch, but shouldn't be cached between batches. Otherwise it would introduce an unreasonable bias against certain indices of data within the batch.
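
A brief sketch (same TensorFlow 2.x assumption) of clearing both cached masks, for example between batches in a custom training loop:

  import tensorflow as tf

  layer = tf.keras.layers.GRU(4, dropout=0.3, recurrent_dropout=0.3)

  # Clear any cached masks so the next batch draws fresh dropout patterns.
  layer.reset_dropout_mask()
  layer.reset_recurrent_dropout_mask()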

reset_states

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/compat/v2/keras/layers/GRU