tf.contrib.keras.optimizers.SGD

class tf.contrib.keras.optimizers.SGD

Defined in tensorflow/contrib/keras/python/keras/optimizers.py.

Stochastic gradient descent optimizer.

Includes support for momentum, learning rate decay, and Nesterov momentum.

Arguments:

lr: float >= 0. Learning rate.
momentum: float >= 0. Momentum factor applied to parameter updates.
decay: float >= 0. Learning rate decay over each update.
nesterov: boolean. Whether to apply Nesterov momentum.
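
For example, a minimal construction-and-compile sketch (the one-layer model here is hypothetical; only the optimizer call follows this API):

import tensorflow as tf

# Plain SGD with momentum, per-update learning rate decay, and Nesterov momentum.
sgd = tf.contrib.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)

model = tf.contrib.keras.models.Sequential([
    tf.contrib.keras.layers.Dense(10, input_shape=(20,), activation='softmax')
])
model.compile(loss='categorical_crossentropy', optimizer=sgd)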

Methods

__init__

__init__(
    lr=0.01,
    momentum=0.0,
    decay=0.0,
    nesterov=False,
    **kwargs
)
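
`**kwargs` accepts the standard Keras gradient-clipping options (`clipnorm`, `clipvalue`). When `decay > 0`, the effective learning rate shrinks on each update; Keras-style decay follows lr_t = lr / (1 + decay * t), sketched below (an illustration of the schedule, not the optimizer internals):

# Illustrative decay schedule: lr_t = lr / (1 + decay * t).
lr, decay = 0.01, 1e-4
for t in range(1, 4):
    print(lr / (1.0 + decay * t))  # ~0.00999..., decreasing slightly each step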

from_config

from_config(
    cls,
    config
)
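
Class method; reconstructs an optimizer from the dictionary returned by `get_config` (see the round-trip example under `get_config` below).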

get_config

get_config()
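
Returns the optimizer's hyperparameters as a Python dictionary. Together with `from_config` this gives a serialization round trip, sketched here (the printed dictionary contents are indicative):

import tensorflow as tf

sgd = tf.contrib.keras.optimizers.SGD(lr=0.1, momentum=0.9)
config = sgd.get_config()
# e.g. {'lr': 0.1, 'momentum': 0.9, 'decay': 0.0, 'nesterov': False}
restored = tf.contrib.keras.optimizers.SGD.from_config(config)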

get_gradients

get_gradients(
    loss,
    params
)
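
Returns the symbolic gradients of `loss` with respect to `params`, applying `clipnorm`/`clipvalue` if they were passed at construction. A small graph-construction sketch:

import tensorflow as tf

K = tf.contrib.keras.backend

x = K.variable([1.0, 2.0])
loss = K.sum(K.square(x))               # loss = x0**2 + x1**2
sgd = tf.contrib.keras.optimizers.SGD(lr=0.01)
grads = sgd.get_gradients(loss, [x])    # symbolic tensors, here equal to [2 * x]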

get_updates

get_updates(
    params,
    constraints,
    loss
)
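
Builds and returns the list of update operations for one training step: the parameter updates (with momentum/Nesterov applied), learning-rate decay bookkeeping, and any weight `constraints` applied afterwards. Keras calls this internally when compiling the training function; user code rarely invokes it directly.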

get_weights

get_weights()

Returns the current value of the weights of the optimizer.

Returns:

A list of numpy arrays.
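
For SGD this state is typically the momentum accumulators (plus internal counters), which are created only once updates have been built; see the note under `set_weights` below.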

set_weights

set_weights(weights)

Sets the weights of the optimizer from Numpy arrays.

Should only be called after computing the gradients (otherwise the optimizer has no weights).

Arguments:

weights: a list of Numpy arrays. The number
    of arrays and their shapes must match
    the number and shapes of the weights
    of the optimizer (i.e. it should match the
    output of `get_weights`).

Raises:

ValueError: in case of incompatible weight shapes.
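
A minimal save/restore sketch (assumes the optimizer `sgd` has already been used for at least one training step, so its state exists):

state = sgd.get_weights()   # list of numpy arrays capturing optimizer state
sgd.set_weights(state)      # restore it later, e.g. when resuming training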

© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/keras/optimizers/SGD
