TensorFlow-Slim (TF-Slim) is a library open-sourced in 2016, mainly intended to "slim down" code: it simplifies model definition and ships with several image-analysis models. TF-Slim is a lightweight library for defining, training and evaluating complex models in TensorFlow.

[tensorflow/contrib/slim]
[tensorflow/models/tree/master/research/slim/]

Module import:

import tensorflow.contrib.slim as slim

<h2>1. TF-Slim Features</h2>

TF-Slim is used for building, training and evaluating neural networks:

  • Removes boilerplate code, allowing models to be defined more compactly.
  • Makes models simpler to develop through commonly used regularizers.
  • Provides several widely used vision models (e.g., VGGNet, AlexNet).
  • Makes it easy to extend complex models and to warm-start training algorithms from existing model checkpoints.

<h2>2. TF-Slim Components</h2>

TF-Slim consists of several independent parts, mainly:

  • arg_scope: a new scope that lets the user define default arguments for specific operations within that scope.
  • data: TF-Slim's dataset definition, data providers, parallel_reader and decoding utilities.
  • evaluation: routines for evaluating models.
  • layers: high-level layers for building models.
  • learning: routines for training models.
  • losses: commonly used loss functions.
  • metrics: popular evaluation metrics.
  • nets: popular network definitions, such as VGG and AlexNet.
  • queues: a context manager for easily and safely starting and closing QueueRunners.
  • regularizers: weight regularizers.
  • variables: convenience wrappers for creating and manipulating variables.

<h2>3. Defining Models with TF-Slim</h2>

TF-Slim defines models by combining variables, layers and scopes.

<h3>3.1 Variables</h3>

Creating raw Variables in native TensorFlow requires either a predefined value or an initialization mechanism (e.g., random sampling from a Gaussian).
Moreover, if a variable needs to be created on a specific device, such as a GPU, the device must be specified explicitly.
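
For contrast, a sketch of what this looks like in raw TensorFlow (the shape and hyperparameters are placeholders, mirroring the slim.variable example below):

with tf.device('/CPU:0'):
    weights = tf.get_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=tf.contrib.layers.l2_regularizer(0.05))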

TF-Slim provides a lighter-weight wrapper for variable creation - variables.py.

For example, to create a weights variable on the CPU, initialized with a truncated normal distribution and regularized with an l2 regularizer:

weights = slim.variable('weights',
                        shape=[10, 10, 3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1),
                        regularizer=slim.l2_regularizer(0.05),
                        device='/CPU:0')

Native TensorFlow has two types of variables: regular variables and local (transient) variables.
Most variables are regular variables: once created, they can be saved to disk with a saver.
Local variables exist only for the duration of a session and are not saved to disk.
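
As a minimal sketch of the distinction (the counter variable here is purely illustrative):

# A local variable lives only for the duration of the session and is
# skipped by tf.train.Saver(), which saves global variables by default.
counter = tf.Variable(0, name='counter', trainable=False,
                      collections=[tf.GraphKeys.LOCAL_VARIABLES])
print(counter in tf.global_variables())  # False
print(counter in tf.local_variables())   # True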

TF-Slim further differentiates variables by defining model variables, which represent the parameters of a model.
Model variables are trained or fine-tuned during learning and are loaded from a checkpoint during evaluation and inference.
Examples include the variables created by slim.fully_connected or slim.conv2d.

Non-model variables are all the other variables used during learning or evaluation that are not needed for actual inference.
For example, global_step is a variable used during learning and evaluation, but it is not part of the model.
Similarly, moving-average variables may mirror model variables, but the moving averages themselves are not model variables.
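
For instance, a moving average can be maintained over the model variables; the resulting "shadow" variables are non-model variables (a sketch, with an arbitrary decay value):

ema = tf.train.ExponentialMovingAverage(decay=0.999)
# Creates one shadow variable per model variable; the shadows are updated
# by running this op, but they are not model variables themselves.
maintain_averages_op = ema.apply(slim.get_model_variables())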

With TF-Slim, model variables and regular variables are both easy to create and retrieve:

# Model Variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()

How does this work?
When you create a model variable via a TF-Slim layer, or directly via the slim.model_variable function, TF-Slim adds the variable to the tf.GraphKeys.MODEL_VARIABLES collection.
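
This can be verified by reading the collection back directly (a sketch; the placeholder shape is an arbitrary assumption):

images = tf.placeholder(tf.float32, [None, 28, 28, 3])
net = slim.conv2d(images, 32, [3, 3], scope='conv1')
# The layer's weights and biases were registered as model variables.
conv_vars = tf.get_collection(tf.GraphKeys.MODEL_VARIABLES, scope='conv1')
print([v.op.name for v in conv_vars])  # ['conv1/weights', 'conv1/biases']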

What if you have a custom layer or variable-creation routine but still want TF-Slim to manage your model variables?
TF-Slim provides a convenience function for adding a model variable to its collection:

my_model_variable = CreateViaCustomCode()

# Letting TF-Slim know about the additional variable.
slim.add_model_variable(my_model_variable)

<h3>3.2 Layers</h3>

The set of TensorFlow Ops is very broad, whereas neural-network developers think about models in terms of higher-level concepts such as layers, losses, metrics and networks.

A layer (e.g., a convolutional layer, a fully connected layer or a BatchNorm layer) is more abstract than a TensorFlow Op and typically involves several Ops.
Furthermore, unlike more primitive Ops, a layer usually (but not always) has variables (tunable parameters) associated with it.
For example, a convolutional layer is composed of several low-level Ops:

  • Creating the weight and bias variables;
  • Convolving the weights with the input from the previous layer;
  • Adding the bias to the result of the convolution;
  • Applying an activation function.

Implemented with raw TensorFlow, a convolutional layer is rather tedious:

input = ...
with tf.name_scope('conv1_1') as scope:
  kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                           stddev=1e-1), name='weights')
  conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
  biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                       trainable=True, name='biases')
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope)

To reduce this code duplication, TF-Slim provides a number of convenient Ops defined at the more abstract level of layers.
For example, the same convolutional layer with TF-Slim:

input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')
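
The one-liner relies on slim.conv2d's defaults; spelled out, it is roughly equivalent to the following sketch (based on the documented default arguments):

net = slim.conv2d(input, 128, [3, 3],
                  stride=1,                  # default stride
                  padding='SAME',            # default padding
                  activation_fn=tf.nn.relu,  # applied by default
                  scope='conv1_1')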

TF-Slim provides standard implementations of many layers used to build networks:

Layer                      TF-Slim
BiasAdd                    slim.bias_add
BatchNorm                  slim.batch_norm
Conv2d                     slim.conv2d
Conv2dInPlane              slim.conv2d_in_plane
Conv2dTranspose (Deconv)   slim.conv2d_transpose
FullyConnected             slim.fully_connected
AvgPool2D                  slim.avg_pool2d
Dropout                    slim.dropout
Flatten                    slim.flatten
MaxPool2D                  slim.max_pool2d
OneHotEncoding             slim.one_hot_encoding
SeparableConv2d            slim.separable_conv2d
UnitNorm                   slim.unit_norm

TF-Slim also provides two meta-operations, repeat and stack, that allow the same operation to be performed repeatedly.
For example, consider this snippet from the VGG network:

net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

A for loop reduces the duplication:

net = ...
for i in range(3):
    net = slim.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i+1))
net = slim.max_pool2d(net, [2, 2], scope='pool2')

TF-Slim's repeat operation makes it cleaner still:

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')

slim.repeat not only applies the same inline arguments, it is also smart enough to unroll the scopes: the scope of each consecutive slim.conv2d call is suffixed with an underscore and the iteration number.
Concretely, the scopes in the example above are automatically named conv3/conv3_1, conv3/conv3_2 and conv3/conv3_3.

TF-Slim's slim.stack operation repeatedly applies the same operation with different arguments to build a stack of layers.
slim.stack creates a new tf.variable_scope for each operation it creates.
For example, a simple way to create a multi-layer perceptron (MLP):

# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, TF-Slim way using slim.stack:
slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

In this example, slim.stack calls slim.fully_connected three times, passing the output of one call to the input of the next, with 32, 64 and 128 hidden units per call, respectively.

Similarly, to stack several convolutional layers:

# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Using stack:
slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

<h3>3.3 Scopes</h3>

In addition to TensorFlow's scope types (name_scope and variable_scope), TF-Slim adds a new scope type: arg_scope.

An arg_scope specifies one or more Ops, together with a set of arguments that will be passed to every one of those Ops defined inside the arg_scope.

For example:

net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')

The three conv layers share the same hyperparameters: two have the same padding, and all three use the same weights_initializer and weights_regularizer.
This code is hard to read and contains many repeated values that ought to be factored out.
One solution is to use variables for the default values:

padding = 'SAME'
initializer = tf.truncated_normal_initializer(stddev=0.01)
regularizer = slim.l2_regularizer(0.0005)
net = slim.conv2d(inputs, 64, [11, 11], 4,
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv1')
net = slim.conv2d(net, 128, [11, 11],
                  padding='VALID',
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv2')
net = slim.conv2d(net, 256, [11, 11],
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv3')

This ensures that the three conv layers share the exact same parameter values, but it does not fully remove the code clutter.

With arg_scope, we can both ensure that each layer uses the same values and simplify the code:

with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

arg_scope makes the code cleaner, simpler and easier to maintain.
Although argument values are specified in the arg_scope, they can be overridden locally:
while the padding argument is set to 'SAME', the second conv layer overrides it with 'VALID'.

arg_scopes can also be nested, and multiple Ops can be used in the same scope. For example:

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      activation_fn=tf.nn.relu,
                      weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                      weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')

In this example, the first arg_scope applies the same weights_initializer and weights_regularizer arguments to the conv2d and fully_connected layers.
In the second arg_scope, additional default arguments are specified for conv2d only.

<h3>3.4 Example: VGG16</h3>

Combining TF-Slim's variables, operations and scopes, the VGG16 network can be defined as follows:

def vgg16(inputs):
  with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      activation_fn=tf.nn.relu,
                      weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                      weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
    net = slim.max_pool2d(net, [2, 2], scope='pool1')
    net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
    net = slim.max_pool2d(net, [2, 2], scope='pool2')
    net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
    net = slim.max_pool2d(net, [2, 2], scope='pool3')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
    net = slim.max_pool2d(net, [2, 2], scope='pool4')
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    net = slim.max_pool2d(net, [2, 2], scope='pool5')
    net = slim.fully_connected(net, 4096, scope='fc6')
    net = slim.dropout(net, 0.5, scope='dropout6')
    net = slim.fully_connected(net, 4096, scope='fc7')
    net = slim.dropout(net, 0.5, scope='dropout7')
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
  return net
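
A hypothetical usage sketch (the 224x224 RGB input shape follows the standard VGG16 setup and is an assumption here):

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
logits = vgg16(images)  # [batch_size, 1000] class scores

Note that slim.fully_connected flattens inputs of rank greater than 2 before the matrix multiply, which is why no explicit flatten appears between pool5 and fc6.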

<h2>4. Training Models</h2>

Training a TensorFlow model requires a model, a loss function, the gradient computation, and a training routine that iteratively computes the gradients of the model weights with respect to the loss and updates the weights accordingly.

TF-Slim provides commonly used loss functions, as well as a set of helper functions for running the training and evaluation routines.

<h3>4.1 Losses</h3>

The loss function defines the quantity we want to minimize.
For classification problems, this is typically the cross entropy between the true distribution over classes and the predicted distribution.
For regression problems, it is often the sum of squared differences between the true and the predicted values.
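
As a sketch, the two cases map onto the losses module like this (the tensor names are placeholders):

# Classification: cross entropy between true and predicted distributions.
classification_loss = slim.losses.softmax_cross_entropy(logits, one_hot_labels)
# Regression: sum of squared differences between truth and prediction.
regression_loss = slim.losses.sum_of_squares(predictions, targets)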

Certain models, such as multi-task learning models, require multiple losses simultaneously; in other words, the loss being minimized is the sum of several individual loss functions.
For example, a model that predicts both the scene type of an image and the camera depth of each pixel has a loss that is the sum of the classification loss and the depth-prediction loss.

TF-Slim's losses module provides an easy-to-use mechanism for defining loss functions.
For example, to train the VGG network:

import tensorflow as tf
import tensorflow.contrib.slim as slim
import tensorflow.contrib.slim.nets as nets
vgg = nets.vgg

# Load the images and labels.
images, labels = ...

# Create the model.
predictions, _ = vgg.vgg_16(images)

# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)

In this example, we first create the model (using TF-Slim's VGG implementation) and then add the standard classification loss.

Now consider the multi-task case, where the model has multiple outputs:

# Load the images and labels.
images, scene_labels, depth_labels = ...

# Create the model.
scene_predictions, depth_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)

# The following two lines have the same effect:
total_loss = classification_loss + sum_of_squares_loss
total_loss = slim.losses.get_total_loss(add_regularization_losses=False)

In this example there are two losses, created via slim.losses.softmax_cross_entropy and slim.losses.sum_of_squares.
The total loss can be obtained either by adding the two together or by calling slim.losses.get_total_loss.
How does this work? When you create a loss function through TF-Slim, TF-Slim adds the loss to a special TensorFlow collection of loss functions. This lets you either manage the total loss manually or let TF-Slim manage it for you.
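
That collection can also be inspected directly (a minimal sketch):

# Every loss created through slim.losses is registered in a graph
# collection, so the same total can be assembled by hand.
all_losses = slim.losses.get_losses()
manual_total = tf.add_n(all_losses)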

What if you have a custom loss function and want TF-Slim to manage it as well?
loss_ops.py provides a function for adding a custom loss to the TF-Slim collection.
For example:

# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...

# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss) # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss

# (Regularization Loss is included in the total loss by default).
total_loss2 = slim.losses.get_total_loss()

<h3>4.2 Training Loop</h3>

TF-Slim provides a simple but powerful set of tools for training models - learning.py.
It includes a Train function that repeatedly computes the loss, computes gradients and saves the model to disk, as well as several convenience functions for manipulating gradients.

For example, once the model, the loss function and the optimization scheme have been defined, we can call slim.learning.create_train_op and slim.learning.train to run the optimization:

g = tf.Graph()

# Create the model and specify the losses...
...

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ... # Where checkpoints are stored.

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)

In this example, the arguments to slim.learning.train are:

  • train_op - computes the loss and applies the gradient updates.
  • logdir - the directory in which checkpoints and event files are stored.
  • number_of_steps - limits the number of gradient steps taken.
  • save_summaries_secs=300 - compute summaries every 300/60 = 5 minutes.
  • save_interval_secs=600 - save a model checkpoint every 600/60 = 10 minutes.

<h3>4.3 Example: Training the VGG Model</h3>

import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg

...

train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
    tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
  # Set up the data loading:
  images, labels = ...

  # Define the model:
  predictions = vgg.vgg_16(images, is_training=True)

  # Specify the loss function:
  slim.losses.softmax_cross_entropy(predictions, labels)

  total_loss = slim.losses.get_total_loss()
  tf.summary.scalar('losses/total_loss', total_loss)

  # Specify the optimization scheme:
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

  # create_train_op that ensures that when we evaluate it to get the loss,
  # the update_ops are done and the gradient updates are computed.
  train_tensor = slim.learning.create_train_op(total_loss, optimizer)

  # Actually runs training.
  slim.learning.train(train_tensor, train_log_dir)

<h2>5. Fine-Tuning Models</h2>

<h3>5.1 Brief Recap: Restoring Variables from a Checkpoint</h3>

After a model has been trained, its Variables can be restored from a given checkpoint file with tf.train.Saver().
In many cases, tf.train.Saver() provides a simple mechanism for restoring all or just a few variables.

# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...
# Add ops to restore all the variables.
restorer = tf.train.Saver()

# Add ops to restore some variables.
restorer = tf.train.Saver([v1, v2])

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...

For more details, see Restoring Variables and Choosing which Variables to Save and Restore in the TensorFlow documentation.

<h3>5.2 Partially Restoring Models</h3>

When working with a new dataset or a new task, it is common to fine-tune a pre-trained model.
TF-Slim's helper functions can be used to select just the subset of model variables to restore:

# Create some variables.
v1 = slim.variable(name="v1", ...)
v2 = slim.variable(name="nested/v2", ...)
...

# Get list of variables to restore (which contains only 'v2'). These are all
# equivalent methods:
variables_to_restore = slim.get_variables_by_name("v2")
# or
variables_to_restore = slim.get_variables_by_suffix("2")
# or
variables_to_restore = slim.get_variables(scope="nested")
# or
variables_to_restore = slim.get_variables_to_restore(include=["nested"])
# or
variables_to_restore = slim.get_variables_to_restore(exclude=["v1"])

# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...

<h3>5.3 Restoring Models with Different Variable Names</h3>

When restoring variables from a checkpoint, the Saver first locates the variable names in the checkpoint file and then maps them to variables in the current graph.

Above, we created a saver by passing it a list of variables; in that case, the name used to locate each variable in the checkpoint file is obtained implicitly from each provided variable's var.op.name.
This works well when the variable names in the checkpoint file match those in the graph.
However, sometimes we want to restore a model from a checkpoint whose variable names differ from those in the current graph.
In this case, we must provide the Saver with a dictionary that maps each checkpoint variable name to the corresponding graph variable.

For example:

# Assuming that 'conv1/weights' should be restored from 'vgg16/conv1/weights'
def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

# Assuming that 'conv1/weights' and 'conv1/bias' should be restored from 'conv1/params1' and 'conv1/params2'
def name_in_checkpoint(var):
    if "weights" in var.op.name:
        return var.op.name.replace("weights", "params1")
    if "bias" in var.op.name:
        return var.op.name.replace("bias", "params2")

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var):var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
  # Restore variables from disk.
  restorer.restore(sess, "/tmp/model.ckpt")

<h3>5.4 Fine-Tuning a Model on a Different Task</h3>

Suppose we have a pre-trained VGG16 model that was trained on ImageNet as a 1000-class classifier, and we want to apply it to the Pascal VOC dataset, which has only 20 classes.

In this case, we can initialize training with the pre-trained model's weights, excluding the final layer:

# Load the Pascal VOC data
image, label = MyPascalVocDataLoader(...)
images, labels = tf.train.batch([image, label], batch_size=32)

# Create the model
predictions = vgg.vgg_16(images)

train_op = slim.learning.create_train_op(...)

# Specify where the Model, trained on ImageNet, was saved.
model_path = '/path/to/pre_trained_on_imagenet.checkpoint'

# Specify where the new model will live:
log_dir = '/path/to/my_pascal_model_dir/'

# Restore only the convolutional layers:
variables_to_restore = slim.get_variables_to_restore(exclude=['fc6', 'fc7', 'fc8'])
init_fn = slim.assign_from_checkpoint_fn(model_path, variables_to_restore)

# Start training.
slim.learning.train(train_op, log_dir, init_fn=init_fn)

<h2>6. Evaluating Models</h2>

Once a model has been trained, we often want to see how well it performs in practice.
This is done by picking a set of evaluation metrics that grade the model's performance.
The evaluation code typically loads the data, performs inference, compares the predictions to the ground truth and records the evaluation scores.

<h3>6.1 Metrics</h3>

We define a metric as a performance measure that is not a loss function (losses are optimized directly during training).
For example, we might minimize log loss while our metric of interest is F1 score, or IoU (which is not differentiable and therefore cannot be used as a loss).

TF-Slim provides a set of metric Ops that make model evaluation easy.
Computing a metric can typically be divided into three parts:

  • Initialization - initialize the variables used to compute the metric.
  • Aggregation - perform the operations (e.g., sums) used to compute the metric.
  • Finalization - (optionally) perform any final operation to compute the metric value, e.g., taking means, minima or maxima.

For example, to compute mean_absolute_error, two variables, count and total, are initialized to zero.
During aggregation, we observe a batch of predictions and labels, compute their absolute differences and add the sum to total; each time another batch is observed, count is incremented.
Finally, total is divided by count to obtain the mean.
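
Hand-rolled, the three phases would look roughly like this (a sketch; predictions and labels are assumed to be float tensors of matching shape):

# Initialization: accumulators start at zero; they are local variables,
# since they should not be saved with the model.
total = tf.Variable(0.0, trainable=False, name='total',
                    collections=[tf.GraphKeys.LOCAL_VARIABLES])
count = tf.Variable(0.0, trainable=False, name='count',
                    collections=[tf.GraphKeys.LOCAL_VARIABLES])
# Aggregation: add this batch's absolute error and its sample count.
update_total = tf.assign_add(total, tf.reduce_sum(tf.abs(predictions - labels)))
update_count = tf.assign_add(count, tf.to_float(tf.size(labels)))
# Finalization: the running mean is the ratio of the two accumulators.
mae = total / tf.maximum(count, 1.0)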

Using TF-Slim's built-in streaming metrics, for example:

images, labels = LoadTestData(...)
predictions = MyModel(images)

mae_value_op, mae_update_op = slim.metrics.streaming_mean_absolute_error(predictions, labels)
mre_value_op, mre_update_op = slim.metrics.streaming_mean_relative_error(predictions, labels)
pl_value_op, pl_update_op = slim.metrics.percentage_less(mean_relative_errors, 0.3)

TF-Slim also provides two convenience functions:

# Aggregates the value and update ops in two lists:
value_ops, update_ops = slim.metrics.aggregate_metrics(
    slim.metrics.streaming_mean_absolute_error(predictions, labels),
    slim.metrics.streaming_mean_squared_error(predictions, labels))

# Aggregates the value and update ops in two dictionaries:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})

<h3>6.2 Example: Tracking Multiple Metrics</h3>

import tensorflow as tf
import tensorflow.contrib.slim.nets as nets

slim = tf.contrib.slim
vgg = nets.vgg


# Load the data
images, labels = load_data(...)

# Define the network
predictions = vgg.vgg_16(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})

# Evaluate the model using 1000 batches of data:
num_batches = 1000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())

    for batch_id in range(num_batches):
        sess.run(list(names_to_updates.values()))

    metric_values = sess.run(list(names_to_values.values()))
    for metric, value in zip(names_to_values.keys(), metric_values):
        print('Metric %s has value: %f' % (metric, value))

metric_ops.py can be used in isolation, without using either layers or loss_ops.py.

<h3>6.3 Evaluation Loop</h3>

TF-Slim provides an evaluation module, evaluation.py, which contains helper functions for writing model evaluation scripts using metrics from metric_ops.py. These include functions for periodically running evaluations, evaluating metrics over batches of data, and printing and summarizing the metric results.

For example:

import math

import tensorflow as tf

slim = tf.contrib.slim

# Load the data
images, labels = load_data(...)

# Define the network
predictions = MyModel(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'accuracy': slim.metrics.accuracy(predictions, labels),
    'precision': slim.metrics.precision(predictions, labels),
    'recall': slim.metrics.recall(mean_relative_errors, 0.3),
})

# Create the summary ops such that they also print out to std output:
summary_ops = []
for metric_name, metric_value in names_to_values.items():
    op = tf.summary.scalar(metric_name, metric_value)
    op = tf.Print(op, [metric_value], metric_name)
    summary_ops.append(op)

num_examples = 10000
batch_size = 32
num_batches = math.ceil(num_examples / float(batch_size))

# Setup the global step.
slim.get_or_create_global_step()

checkpoint_dir = ... # Where the model checkpoints are stored.
log_dir = ... # Where the evaluation summaries are written.
eval_interval_secs = ... # How often to run the evaluation.
slim.evaluation.evaluation_loop(
    'local',
    checkpoint_dir,
    log_dir,
    num_evals=num_batches,
    eval_op=names_to_updates.values(),
    summary_op=tf.summary.merge(summary_ops),
    eval_interval_secs=eval_interval_secs)