class tf.contrib.seq2seq.AttentionWrapperState
Defined in tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py.

A namedtuple storing the state of an AttentionWrapper.
Contains:

- cell_state: The state of the wrapped RNNCell at the previous time step.
- attention: The attention emitted at the previous time step.
- time: int32 scalar containing the current time step.
- alignments: The alignment emitted at the previous time step.
- alignment_history: (if enabled) a TensorArray containing alignment matrices from all time steps. Call stack() to convert it to a Tensor.
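The field order above matches the "Alias for field number" list below. As a minimal sketch (not the real class, which lives in tensorflow/contrib/seq2seq), the structure can be mimicked with a plain `collections.namedtuple`:

```python
from collections import namedtuple

# Illustrative stand-in only: mimics the field layout of the real
# AttentionWrapperState namedtuple; placeholder values are not Tensors.
AttentionWrapperState = namedtuple(
    "AttentionWrapperState",
    ["cell_state", "attention", "time", "alignments", "alignment_history"])

state = AttentionWrapperState(
    cell_state=None,       # state of the wrapped RNNCell
    attention=None,        # attention emitted at the previous step
    time=0,                # current time step (an int32 scalar in TF)
    alignments=None,       # alignments emitted at the previous step
    alignment_history=())  # a TensorArray when history is enabled

print(state.time)                        # access by field name
print(state._fields.index("attention"))  # field number 1, as listed below
```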
Properties
alignment_history
Alias for field number 4
alignments
Alias for field number 3
attention
Alias for field number 1
cell_state
Alias for field number 0
time
Alias for field number 2
Methods
__new__
__new__(_cls, cell_state, attention, time, alignments, alignment_history)
Create new instance of AttentionWrapperState(cell_state, attention, time, alignments, alignment_history)
clone
clone(**kwargs)
Clone this object, overriding components provided by kwargs.
Example:
initial_state = attention_wrapper.zero_state(dtype=..., batch_size=...)
initial_state = initial_state.clone(cell_state=encoder_state)
Args:
**kwargs: Any properties of the state object to replace in the returned AttentionWrapperState.
Returns:
A new AttentionWrapperState whose properties are the same as this one, except any overridden properties as provided in kwargs.
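Because the state is a namedtuple, clone behaves like namedtuple's _replace: fields named in kwargs are overridden, all others carry over unchanged. A minimal sketch of those semantics, using a plain-Python stand-in rather than the real TensorFlow class:

```python
from collections import namedtuple

# Stand-in for the real AttentionWrapperState, to illustrate clone().
AttentionWrapperState = namedtuple(
    "AttentionWrapperState",
    ["cell_state", "attention", "time", "alignments", "alignment_history"])

def clone(state, **kwargs):
    # Same idea as the real method: override the named fields,
    # keep every other field from the existing state.
    return state._replace(**kwargs)

# Hypothetical values standing in for the zero state and encoder state.
initial = AttentionWrapperState(
    cell_state="zero_state", attention=None, time=0,
    alignments=None, alignment_history=())
updated = clone(initial, cell_state="encoder_state")

print(updated.cell_state)  # "encoder_state" (overridden)
print(updated.time)        # 0 (carried over unchanged)
```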
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/AttentionWrapperState