tf.contrib.rnn.NASCell

Neural Architecture Search (NAS) recurrent network cell.

Inherits From: LayerRNNCell

This implements the recurrent cell from the paper:

https://arxiv.org/abs/1611.01578

Barret Zoph and Quoc V. Le. "Neural Architecture Search with Reinforcement Learning" Proc. ICLR 2017.

The class uses an optional projection layer.

Args
num_units: int, the number of units in the NAS cell.
num_proj: (optional) int, the output dimensionality for the projection matrices. If None, no projection is performed.
use_bias: (optional) bool, whether to use biases within the cell. Defaults to False.
reuse: (optional) Python boolean describing whether to reuse variables in an existing scope. If not True and the existing scope already has the given variables, an error is raised.
**kwargs: additional keyword arguments.
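
The following is a minimal usage sketch, assuming a TF 1.x graph-mode session where tf.contrib is available; the layer sizes and input shapes are illustrative only:

```python
import numpy as np
import tensorflow as tf

# Illustrative sizes: 64 internal units, projected down to 32 outputs.
cell = tf.contrib.rnn.NASCell(num_units=64, num_proj=32, use_bias=True)

# [batch, time, features] inputs for a toy sequence.
inputs = tf.placeholder(tf.float32, [None, 10, 16])
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(outputs,
                   {inputs: np.random.rand(4, 10, 16).astype(np.float32)})
    print(out.shape)  # (4, 10, 32): last dim is num_proj when projection is used
```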
Attributes
graph: DEPRECATED FUNCTION
output_size: Integer or TensorShape, the size of outputs produced by this cell.
scope_name
state_size: size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes.
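
For example (a short sketch, again assuming TF 1.x with tf.contrib; the sizes are illustrative), the shape-related attributes above can be inspected directly on a constructed cell:

```python
import tensorflow as tf

with_proj = tf.contrib.rnn.NASCell(num_units=64, num_proj=32)
no_proj = tf.contrib.rnn.NASCell(num_units=64)

print(with_proj.output_size)  # 32: num_proj when a projection layer is used
print(no_proj.output_size)    # 64: num_units when no projection is performed

# state_size is a nested structure pairing the internal state size with
# the (possibly projected) hidden size.
print(with_proj.state_size)
print(no_proj.state_size)
```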

Methods

get_initial_state

zero_state

Return zero-filled state tensor(s).

Args
batch_size: int, float, or unit (scalar) Tensor representing the batch size.
dtype: the data type to use for the state.
Returns
If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros.

If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with the shapes [batch_size, s] for each s in state_size.
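
A brief sketch of a typical call (TF 1.x, illustrative sizes); the returned structure mirrors state_size as described above:

```python
import tensorflow as tf

cell = tf.contrib.rnn.NASCell(num_units=64)
initial_state = cell.zero_state(batch_size=8, dtype=tf.float32)

# The result is a nested structure of zero-filled tensors, one of shape
# [8, s] for each size s in cell.state_size.
print(initial_state)
```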

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/rnn/NASCell