TensorFlow function: get_collection

Function: tf.get_collection
get_collection(
    key,
    scope=None
)

Defined in: tensorflow/python/framework/ops.py.

See the guide: Building Graphs > Graph collections

Wraps Graph.get_collection() using the default graph.

Args:

  • key: The key for the collection. For example, the GraphKeys class contains many standard names for collections.
  • scope: (Optional.) If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix.

Returns:

The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.
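
To make the key and scope arguments concrete, here is a minimal sketch (assuming TensorFlow 1.x; the collection name 'my_vars' and the variable scopes are made up for illustration):

import tensorflow as tf

# Create two variables under different name scopes.
with tf.variable_scope('layer1'):
    w1 = tf.get_variable('w', shape=[2, 2])
with tf.variable_scope('layer2'):
    w2 = tf.get_variable('w', shape=[2, 2])

# Add both to a custom collection under the key 'my_vars'.
tf.add_to_collection('my_vars', w1)
tf.add_to_collection('my_vars', w2)

print(tf.get_collection('my_vars'))                  # [w1, w2], in collection order
print(tf.get_collection('my_vars', scope='layer1'))  # [w1] only: prefix match via re.match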

Function: tf.get_collection_ref
get_collection_ref(key)

Defined in: tensorflow/python/framework/ops.py.

See the guide: Building Graphs > Graph collections

Wraps Graph.get_collection_ref() using the default graph.

Args:

  • key: The key for the collection. For example, the GraphKeys class contains many standard names for collections.

Returns:

The list of values in the collection with the given name, or an empty list if no value has been added to that collection. Note that this returns the collection list itself, which can be modified in place to change the collection.
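
The difference from get_collection matters in practice: get_collection returns a copy of the collection, while get_collection_ref returns the live list. A minimal sketch (TensorFlow 1.x; the key 'demo' is made up):

import tensorflow as tf

tf.add_to_collection('demo', 1)

snapshot = tf.get_collection('demo')    # a copy of the collection's contents
live = tf.get_collection_ref('demo')    # the collection list itself

live.append(2)                          # mutates the actual collection
print(tf.get_collection('demo'))        # [1, 2]
print(snapshot)                         # [1] -- the earlier copy is unaffected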

For example:

# From 'My-TensorFlow-tutorials-master/02 CIFAR10/cifar10.py'

variables = tf.get_collection(tf.GraphKeys.VARIABLES)
for i in variables:
    print(i)

>>>   <tf.Variable 'conv1/weights:0' shape=(3, 3, 3, 96) dtype=float32_ref>
      <tf.Variable 'conv1/biases:0' shape=(96,) dtype=float32_ref>
      <tf.Variable 'conv2/weights:0' shape=(3, 3, 96, 64) dtype=float32_ref>
      <tf.Variable 'conv2/biases:0' shape=(64,) dtype=float32_ref>
      <tf.Variable 'local3/weights:0' shape=(16384, 384) dtype=float32_ref>
      <tf.Variable 'local3/biases:0' shape=(384,) dtype=float32_ref>
      <tf.Variable 'local4/weights:0' shape=(384, 192) dtype=float32_ref>
      <tf.Variable 'local4/biases:0' shape=(192,) dtype=float32_ref>
      <tf.Variable 'softmax_linear/softmax_linear:0' shape=(192, 10) dtype=float32_ref>
      <tf.Variable 'softmax_linear/biases:0' shape=(10,) dtype=float32_ref>

tf.get_collection lists all of the values stored under the given key.

Going further:

tf.GraphKeys exposes many standard collection names as attributes, such as VARIABLES (which contains all variables) and REGULARIZATION_LOSSES.
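
Each attribute on tf.GraphKeys is simply a string key; a few of the standard ones for reference (TensorFlow 1.x):

import tensorflow as tf

print(tf.GraphKeys.GLOBAL_VARIABLES)       # 'variables' (VARIABLES is a legacy alias)
print(tf.GraphKeys.TRAINABLE_VARIABLES)    # 'trainable_variables'
print(tf.GraphKeys.REGULARIZATION_LOSSES)  # 'regularization_losses'
print(tf.GraphKeys.UPDATE_OPS)             # 'update_ops'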

A concrete use of tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES):

# Assumed imports for this example; `FLAGS` (providing `data_dir` and `regu`)
# is expected to be defined elsewhere, e.g. via tf.app.flags.
import numpy as np
import tensorflow as tf
from tensorflow.contrib import layers
from tensorflow.examples.tutorials.mnist import input_data

def easier_network(x, reg):
    """ A small network built with tf.contrib.layers, with input `x`. """
    with tf.variable_scope('EasyNet'):
        out = layers.flatten(x)
        out = layers.fully_connected(out,
                                     num_outputs=200,
                                     weights_initializer=layers.xavier_initializer(uniform=True),
                                     weights_regularizer=layers.l2_regularizer(scale=reg),
                                     activation_fn=tf.nn.tanh)
        out = layers.fully_connected(out,
                                     num_outputs=200,
                                     weights_initializer=layers.xavier_initializer(uniform=True),
                                     weights_regularizer=layers.l2_regularizer(scale=reg),
                                     activation_fn=tf.nn.tanh)
        out = layers.fully_connected(out,
                                     num_outputs=10,  # Because there are ten digits!
                                     weights_initializer=layers.xavier_initializer(uniform=True),
                                     weights_regularizer=layers.l2_regularizer(scale=reg),
                                     activation_fn=None)
        return out

def main(_):
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)
    x = tf.placeholder(tf.float32, [None, 784])
    y_ = tf.placeholder(tf.float32, [None, 10])

    # Make a network with regularization
    y_conv = easier_network(x, FLAGS.regu)

    # List the trainable variables created under the 'EasyNet' scope.
    weights = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'EasyNet')
    print("")
    for w in weights:
        shp = w.get_shape().as_list()
        print("- {} shape:{} size:{}".format(w.name, shp, np.prod(shp)))
    print("")

    # List the regularization losses created under the same scope.
    reg_ws = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES, 'EasyNet')
    for w in reg_ws:
        shp = w.get_shape().as_list()
        print("- {} shape:{} size:{}".format(w.name, shp, np.prod(shp)))
    print("")

    # Make the loss function `loss_fn` with regularization.
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
    loss_fn = cross_entropy + tf.reduce_sum(reg_ws)
    train_step = tf.train.AdamOptimizer(1e-4).minimize(loss_fn)

main(None)

>>>   - EasyNet/fully_connected/weights:0 shape:[784, 200] size:156800
      - EasyNet/fully_connected/biases:0 shape:[200] size:200
      - EasyNet/fully_connected_1/weights:0 shape:[200, 200] size:40000
      - EasyNet/fully_connected_1/biases:0 shape:[200] size:200
      - EasyNet/fully_connected_2/weights:0 shape:[200, 10] size:2000
      - EasyNet/fully_connected_2/biases:0 shape:[10] size:10

      - EasyNet/fully_connected/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0
      - EasyNet/fully_connected_1/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0
      - EasyNet/fully_connected_2/kernel/Regularizer/l2_regularizer:0 shape:[] size:1.0

As the output above shows (in particular the loop over reg_ws), every regularization loss created in the graph is collected into tf.GraphKeys.REGULARIZATION_LOSSES.
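
TensorFlow 1.x also ships convenience wrappers around this collection; a brief sketch (the 'EasyNet' scope refers to the example above):

import tensorflow as tf

# Equivalent to tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES, 'EasyNet'):
reg_losses = tf.losses.get_regularization_losses(scope='EasyNet')

# Their sum as a single scalar tensor, handy when building a loss:
total_reg = tf.losses.get_regularization_loss(scope='EasyNet')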


