tf.stop_gradient

stop_gradient(
    input,
    name=None
)

Defined in tensorflow/python/ops/gen_array_ops.py.

See the guide: Training > Gradient Computation

Stops gradient computation.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, this op prevents the contribution of its inputs from being taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding the inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator and are not taken into account when computing gradients.
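
A minimal sketch of this behavior, assuming eager execution with tf.GradientTape (the values and variable names are illustrative only): the op's output is numerically identical to its input, but it contributes nothing to the computed gradient.

import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x * x                    # gradient flows through this term
    z = tf.stop_gradient(x * x)  # same value, but masked from the gradient
    loss = y + z                 # numerically equal to 2 * x**2

# d(loss)/dx is 2*x = 6.0 rather than 4*x = 12.0, because z is treated
# as a constant by the gradient computation.
print(tape.gradient(loss, x))  # tf.Tensor(6.0, shape=(), dtype=float32)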

This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:

  • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
  • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
  • Adversarial training, where no backprop should happen through the adversarial example generation process.
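
As a sketch of the last case, the following assumes a toy Keras model, random data, and a fast-gradient-sign-style perturbation; the model, loss function, and epsilon are hypothetical and chosen only to show where stop_gradient sits. Wrapping the generated adversarial inputs in stop_gradient treats them as constants, so the outer gradient does not backpropagate through their construction.

import tensorflow as tf

# Hypothetical tiny model and data, for illustration only.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
images = tf.random.normal([4, 8])
labels = tf.constant([0, 1, 0, 1])
epsilon = 0.1

with tf.GradientTape() as outer:
    # Generate adversarial examples inside the same differentiation graph.
    with tf.GradientTape() as inner:
        inner.watch(images)
        clean_loss = loss_fn(labels, model(images))
    perturbation = epsilon * tf.sign(inner.gradient(clean_loss, images))

    # stop_gradient marks the adversarial inputs as constants, so the
    # outer gradient does not flow back through their generation.
    adv_images = tf.stop_gradient(images + perturbation)
    adv_loss = loss_fn(labels, model(adv_images))

grads = outer.gradient(adv_loss, model.trainable_variables)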

Args:

  • input: A Tensor.
  • name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as input.

© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/stop_gradient
