tf.fake_quant_with_min_max_vars_per_channel_gradient
fake_quant_with_min_max_vars_per_channel_gradient(
    gradients,
    inputs,
    min,
    max,
    num_bits=None,
    name=None
)
Defined in tensorflow/python/ops/gen_array_ops.py.
See the guide: Tensor Transformations > Fake quantization
Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation.
Args:
- gradients: A Tensor of type float32. Backpropagated gradients above the FakeQuantWithMinMaxVars operation, shape one of: [d], [b, d], [b, h, w, d].
- inputs: A Tensor of type float32. Values passed as inputs to the FakeQuantWithMinMaxVars operation, shape same as gradients.
- min: A Tensor of type float32. Lower bound of the per-channel quantization interval; floats of shape [d].
- max: A Tensor of type float32. Upper bound of the per-channel quantization interval; floats of shape [d].
- num_bits: An optional int. Defaults to 8. The bitwidth of the quantization; between 2 and 8, inclusive.
- name: A name for the operation (optional).
Returns:
A tuple of Tensor objects (backprops_wrt_input, backprop_wrt_min, backprop_wrt_max).
- backprops_wrt_input: A Tensor of type float32. Backpropagated gradients w.r.t. inputs, shape same as inputs: gradients * (inputs >= min && inputs <= max).
- backprop_wrt_min: A Tensor of type float32. Backpropagated gradients w.r.t. the min parameter, shape [d]: sum_per_d(gradients * (inputs < min)).
- backprop_wrt_max: A Tensor of type float32. Backpropagated gradients w.r.t. the max parameter, shape [d]: sum_per_d(gradients * (inputs > max)).
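The three gradient formulas above can be sketched in plain NumPy to make the per-channel semantics concrete. This is an illustrative reimplementation, not the TensorFlow kernel; the function name, the sample values, and the choice to treat the last axis as the channel axis d are assumptions for this sketch.

```python
import numpy as np

def fake_quant_per_channel_grad(gradients, inputs, min_, max_):
    """Illustrative NumPy version of the documented gradient formulas.

    gradients, inputs: arrays of shape [..., d] (e.g. [d], [b, d], [b, h, w, d]).
    min_, max_: per-channel quantization bounds, shape [d].
    """
    # gradients * (inputs >= min && inputs <= max), same shape as inputs.
    inside = (inputs >= min_) & (inputs <= max_)
    backprops_wrt_input = gradients * inside
    # sum_per_d(...): reduce over every axis except the channel axis, giving shape [d].
    reduce_axes = tuple(range(inputs.ndim - 1))
    backprop_wrt_min = np.sum(gradients * (inputs < min_), axis=reduce_axes)
    backprop_wrt_max = np.sum(gradients * (inputs > max_), axis=reduce_axes)
    return backprops_wrt_input, backprop_wrt_min, backprop_wrt_max

# Two channels (d = 2): channel 0 clamps to [0, 1], channel 1 to [0, 2].
inputs = np.array([[-1.0, 0.5], [0.2, 1.5], [0.8, 3.0]], dtype=np.float32)  # [b, d]
gradients = np.ones_like(inputs)
d_in, d_min, d_max = fake_quant_per_channel_grad(
    gradients, inputs, np.array([0.0, 0.0]), np.array([1.0, 2.0]))
# d_in is zeroed where an input falls outside its channel's interval;
# d_min / d_max collect, per channel, the gradient mass that was clipped below / above.
```

Note that for an in-range input the gradient passes through unchanged, while the min and max gradients are nonzero only for the channels whose inputs were clipped.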
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/fake_quant_with_min_max_vars_per_channel_gradient