An introduction to the slim module API in TensorFlow

Summary:

I recently needed to use the slim module, so I am posting the slim GitHub README here and translating it piece by piece.

GitHub: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim
TensorFlow-Slim

TF-Slim is a lightweight library for defining, training and evaluating complex models in TensorFlow. Components of tf-slim can be freely mixed with native tensorflow, as well as other frameworks, such as tf.contrib.learn.
Usage

import tensorflow.contrib.slim as slim

Why TF-Slim?

TF-Slim is a library that makes building, training and evaluating neural networks simple:
Allows the user to define models much more compactly by eliminating boilerplate code. This is accomplished through the use of argument scoping and numerous high-level layers and variables. These tools increase readability and maintainability, reduce the likelihood of an error from copy-and-pasting hyperparameter values and simplify hyperparameter tuning.
Makes developing models simple by providing commonly used regularizers.
Several widely used computer vision models (e.g., VGG, AlexNet) have been developed in slim, and are available to users. These can either be used as black boxes, or can be extended in various ways, e.g., by adding "multiple heads" to different internal layers.
Slim makes it easy to extend complex models, and to warm start training algorithms by using pieces of pre-existing model checkpoints.
What are the various components of TF-Slim?

TF-Slim is composed of several parts which were designed to exist independently. These include the following main pieces (explained in detail below):
arg_scope: provides a new scope named arg_scope that allows a user to define default arguments for specific operations within that scope.
data: contains TF-Slim's dataset definition, data providers, parallel_reader, and decoding utilities.
evaluation: contains routines for evaluating models.
layers: contains high level layers for building models using tensorflow.
learning: contains routines for training models.
losses: contains commonly used loss functions.
metrics: contains popular evaluation metrics.
nets: contains popular network definitions such as VGG and AlexNet models.
queues: provides a context manager for easily and safely starting and closing QueueRunners.
regularizers: contains weight regularizers.
variables: provides convenience wrappers for variable creation and manipulation.
Defining Models

Models can be succinctly defined using TF-Slim by combining its variables, layers and scopes. Each of these elements is defined below.
Variables

Creating Variables in native tensorflow requires either a predefined value or an initialization mechanism (e.g. randomly sampled from a Gaussian). Furthermore, if a variable needs to be created on a specific device, such as a GPU, the specification must be made explicit. To alleviate the code required for variable creation, TF-Slim provides a set of thin wrapper functions in variables.py which allow callers to easily define variables.
For example, to create a weight variable, initialize it using a truncated_normal distribution, regularize it with an l2_loss and place it on the CPU, one need only declare the following:
weights = slim.variable('weights',
                        shape=[10, 10, 3, 3],
                        initializer=tf.truncated_normal_initializer(stddev=0.1),
                        regularizer=slim.l2_regularizer(0.05),
                        device='/CPU:0')
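For comparison, here is a rough sketch (our own, not from the README) of what the same variable takes in native TensorFlow; note the device placement and regularizer have to be wired up by hand:

import tensorflow as tf

# A sketch of a native-TensorFlow equivalent of the slim.variable call above.
with tf.device('/CPU:0'):
    weights = tf.get_variable(
        'weights',
        shape=[10, 10, 3, 3],
        initializer=tf.truncated_normal_initializer(stddev=0.1),
        regularizer=tf.contrib.layers.l2_regularizer(0.05))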

Note that in native TensorFlow, there are two types of variables: regular variables and local (transient) variables. The vast majority of variables are regular variables: once created, they can be saved to disk using a saver. Local variables are those variables that only exist for the duration of a session and are not saved to disk.
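As an illustrative sketch (the variable names are ours), a local variable is one placed in the LOCAL_VARIABLES collection, and a default Saver will not pick it up:

# Regular variable: saved to disk by a default tf.train.Saver.
weights = tf.Variable(tf.zeros([10]), name='weights')

# Local (transient) variable: exists only for the duration of a session.
batch_counter = tf.Variable(0, name='batch_counter', trainable=False,
                            collections=[tf.GraphKeys.LOCAL_VARIABLES])

saver = tf.train.Saver()  # by default saves only the global (regular) variables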
TF-Slim further differentiates variables by defining model variables, which are variables that represent parameters of a model. Model variables are trained or fine-tuned during learning and are loaded from a checkpoint during evaluation or inference. Examples include the variables created by a slim.fully_connected or slim.conv2d layer. Non-model variables are all other variables that are used during learning or evaluation but are not required for actually performing inference. For example, the global_step is a variable used during learning and evaluation but it is not actually part of the model. Similarly, moving average variables might mirror model variables, but the moving averages are not themselves model variables.
Both model variables and regular variables can be easily created and retrieved via TF-Slim:
# Model Variables
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/CPU:0')
model_variables = slim.get_model_variables()

# Regular variables
my_var = slim.variable('my_var',
                       shape=[20, 1],
                       initializer=tf.zeros_initializer())
regular_variables_and_model_variables = slim.get_variables()
How does this work? When you create a model variable via TF-Slim's layers or directly via the slim.model_variable function, TF-Slim adds the variable to the tf.GraphKeys.MODEL_VARIABLES collection. What if you have your own custom layers or variable creation routine but still want TF-Slim to manage or be aware of your model variables? TF-Slim provides a convenience function for adding the model variable to its collection:
my_model_variable = CreateViaCustomCode()

# Letting TF-Slim know about the additional variable.
slim.add_model_variable(my_model_variable)

Layers

While the set of TensorFlow operations is quite extensive, developers of neural networks typically think of models in terms of higher level concepts like "layers", "losses", "metrics", and "networks". A layer, such as a Convolutional Layer, a Fully Connected Layer or a BatchNorm Layer, is more abstract than a single TensorFlow operation and typically involves several operations. Furthermore, a layer usually (but not always) has variables (tunable parameters) associated with it, unlike more primitive operations. For example, a Convolutional Layer in a neural network is composed of several low level operations:
Creating the weight and bias variables
Convolving the weights with the input from the previous layer
Adding the biases to the result of the convolution.
Applying an activation function.

Using only plain TensorFlow code, this can be rather laborious:
input = ...
with tf.name_scope('conv1_1') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32,
                                             stddev=1e-1), name='weights')
    conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)

To alleviate the need to duplicate this code repeatedly, TF-Slim provides a number of convenient operations defined at the more abstract level of neural network layers. For example, compare the code above to an invocation of the corresponding TF-Slim code:
input = ...
net = slim.conv2d(input, 128, [3, 3], scope='conv1_1')

TF-Slim provides standard implementations for numerous components for building neural networks. These include:
Layer | TF-Slim
------|--------
BiasAdd | slim.bias_add
BatchNorm | slim.batch_norm
Conv2d | slim.conv2d
Conv2dInPlane | slim.conv2d_in_plane
Conv2dTranspose (Deconv) | slim.conv2d_transpose
FullyConnected | slim.fully_connected
AvgPool2D | slim.avg_pool2d
Dropout | slim.dropout
Flatten | slim.flatten
MaxPool2D | slim.max_pool2d
OneHotEncoding | slim.one_hot_encoding
SeparableConv2d | slim.separable_conv2d
UnitNorm | slim.unit_norm
TF-Slim also provides two meta-operations called repeat and stack that allow users to repeatedly perform the same operation. For example, consider the following snippet from the VGG network whose layers perform several convolutions in a row between pooling layers:
net = ...
net = slim.conv2d(net, 256, [3, 3], scope='conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
One way to reduce this code duplication would be via a for loop:
net = ...
for i in range(3):
    net = slim.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i + 1))
net = slim.max_pool2d(net, [2, 2], scope='pool2')
This can be made even cleaner by using TF-Slim's repeat operation:
net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
Notice that the slim.repeat not only applies the same argument in-line, it also is smart enough to unroll the scopes such that the scopes assigned to each subsequent call of slim.conv2d are appended with an underscore and iteration number. More concretely, the scopes in the example above would be named 'conv3/conv3_1', 'conv3/conv3_2' and 'conv3/conv3_3'.
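As a quick sanity check (our own sketch, assuming the repeat call above has been run), the generated scopes can be inspected through the model variables:

# Print the variable names created by slim.repeat; expect scopes like
# 'conv3/conv3_1/weights', 'conv3/conv3_2/weights', 'conv3/conv3_3/weights'.
for v in slim.get_model_variables('conv3'):
    print(v.op.name)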
Furthermore, TF-Slim's slim.stack operator allows a caller to repeatedly apply the same operation with different arguments to create a stack or tower of layers. slim.stack also creates a new tf.variable_scope for each operation created. For example, a simple way to create a Multi-Layer Perceptron (MLP):
# Verbose way:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

# Equivalent, TF-Slim way using slim.stack:
slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')
In this example, slim.stack calls slim.fully_connected three times passing the output of one invocation of the function to the next. However, the number of hidden units in each invocation changes from 32 to 64 to 128. Similarly, one can use stack to simplify a tower of multiple convolutions:
# Verbose way:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')
x = slim.conv2d(x, 64, [1, 1], scope='core/core_4')

# Using stack:
slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

Scopes

In addition to the types of scope mechanisms in TensorFlow (name_scope, variable_scope), TF-Slim adds a new scoping mechanism called arg_scope. This new scope allows a user to specify one or more operations and a set of arguments which will be passed to each of the operations defined in the arg_scope. This functionality is best illustrated by example. Consider the following code:
net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')

It should be clear that these three convolution layers share many of the same hyperparameters. Two have the same padding, and all three have the same weights_initializer and weights_regularizer. This code is hard to read and contains a lot of repeated values that should be factored out. One solution would be to specify default values using variables:
padding = 'SAME'
initializer = tf.truncated_normal_initializer(stddev=0.01)
regularizer = slim.l2_regularizer(0.0005)
net = slim.conv2d(inputs, 64, [11, 11], 4,
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv1')
net = slim.conv2d(net, 128, [11, 11],
                  padding='VALID',
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv2')
net = slim.conv2d(net, 256, [11, 11],
                  padding=padding,
                  weights_initializer=initializer,
                  weights_regularizer=regularizer,
                  scope='conv3')

This solution ensures that all three convolutions share the exact same parameter values but doesn't completely reduce the code clutter. By using an arg_scope, we can both ensure that each layer uses the same values and simplify the code:
with slim.arg_scope([slim.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')

As the example illustrates, the use of arg_scope makes the code cleaner, simpler and easier to maintain. Notice that while argument values are specified in the arg_scope, they can be overwritten locally. In particular, while the padding argument has been set to 'SAME', the second convolution overrides it with the value of 'VALID'.
One can also nest arg_scopes and use multiple operations in the same scope. For example:
with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):
        net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
        net = slim.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')

In this example, the first arg_scope applies the same weights_initializer and weights_regularizer arguments to the conv2d and fully_connected layers in its scope. In the second arg_scope, additional default arguments for conv2d only are specified.
Working Example: Specifying the VGG16 Layers

By combining TF-Slim Variables, Operations and scopes, we can write a normally very complex network with very few lines of code. For example, the entire VGG architecture can be defined with just the following snippet:
def vgg16(inputs):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        net = slim.max_pool2d(net, [2, 2], scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
        net = slim.max_pool2d(net, [2, 2], scope='pool5')
        net = slim.fully_connected(net, 4096, scope='fc6')
        net = slim.dropout(net, 0.5, scope='dropout6')
        net = slim.fully_connected(net, 4096, scope='fc7')
        net = slim.dropout(net, 0.5, scope='dropout7')
        net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc8')
    return net
Training Models

Training TensorFlow models requires a model, a loss function, the gradient computation and a training routine that iteratively computes the gradients of the model weights relative to the loss and updates the weights accordingly. TF-Slim provides both common loss functions and a set of helper functions that run the training and evaluation routines.
Losses

The loss function defines a quantity that we want to minimize. For classification problems, this is typically the cross entropy between the true distribution and the predicted probability distribution across classes. For regression problems, this is often the sum-of-squares differences between the predicted and true values.
Certain models, such as multi-task learning models, require the use of multiple loss functions simultaneously. In other words, the loss function ultimately being minimized is the sum of various other loss functions. For example, consider a model that predicts both the type of scene in an image as well as the depth from the camera of each pixel. This model's loss function would be the sum of the classification loss and depth prediction loss.
TF-Slim provides an easy-to-use mechanism for defining and keeping track of loss functions via the losses module. Consider the simple case where we want to train the VGG network:
import tensorflow as tf
vgg = tf.contrib.slim.nets.vgg

# Load the images and labels.
images, labels = ...

# Create the model.
predictions, _ = vgg.vgg_16(images)

# Define the loss functions and get the total loss.
loss = slim.losses.softmax_cross_entropy(predictions, labels)

In this example, we start by creating the model (using TF-Slim's VGG implementation), and add the standard classification loss. Now, let's turn to the case where we have a multi-task model that produces multiple outputs:
# Load the images and labels.
images, scene_labels, depth_labels = ...

# Create the model.
scene_predictions, depth_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)

# The following two lines have the same effect:
total_loss = classification_loss + sum_of_squares_loss
total_loss = slim.losses.get_total_loss(add_regularization_losses=False)

In this example, we have two losses which we add by calling slim.losses.softmax_cross_entropy and slim.losses.sum_of_squares. We can obtain the total loss by adding them together (total_loss) or by calling slim.losses.get_total_loss(). How did this work? When you create a loss function via TF-Slim, TF-Slim adds the loss to a special TensorFlow collection of loss functions. This enables you to either manage the total loss manually, or allow TF-Slim to manage them for you.
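To make the collection mechanics concrete, here is a small sketch of our own (assuming the ops defined above): each slim loss should be registered in the standard losses collection, which is what get_total_loss() reads back.

# Each slim loss is added to a graph collection; get_total_loss() sums them.
loss1 = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
loss2 = slim.losses.sum_of_squares(depth_predictions, depth_labels)
print(tf.get_collection(tf.GraphKeys.LOSSES))  # contains loss1 and loss2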
What if you want to let TF-Slim manage the losses for you but have a custom loss function? loss_ops.py also has a function that adds this loss to TF-Slim's collection. For example:
# Load the images and labels.
images, scene_labels, depth_labels, pose_labels = ...

# Create the model.
scene_predictions, depth_predictions, pose_predictions = CreateMultiTaskModel(images)

# Define the loss functions and get the total loss.
classification_loss = slim.losses.softmax_cross_entropy(scene_predictions, scene_labels)
sum_of_squares_loss = slim.losses.sum_of_squares(depth_predictions, depth_labels)
pose_loss = MyCustomLossFunction(pose_predictions, pose_labels)
slim.losses.add_loss(pose_loss)  # Letting TF-Slim know about the additional loss.

# The following two ways to compute the total loss are equivalent:
regularization_loss = tf.add_n(slim.losses.get_regularization_losses())
total_loss1 = classification_loss + sum_of_squares_loss + pose_loss + regularization_loss

# (Regularization Loss is included in the total loss by default).
total_loss2 = slim.losses.get_total_loss()

In this example, we can again either produce the total loss function manually or let TF-Slim know about the additional loss and let TF-Slim handle the losses.
Training Loop

TF-Slim provides a simple but powerful set of tools for training models, found in learning.py. These include a Train function that repeatedly measures the loss, computes gradients and saves the model to disk, as well as several convenience functions for manipulating gradients. For example, once we've specified the model, the loss function and the optimization scheme, we can call slim.learning.create_train_op and slim.learning.train to perform the optimization:
g = tf.Graph()

# Create the model and specify the losses...
...

total_loss = slim.losses.get_total_loss()
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

# create_train_op ensures that each time we ask for the loss, the update_ops
# are run and the gradients being computed are applied too.
train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = ...  # Where checkpoints are stored.

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)

In this example, slim.learning.train is provided with the train_op which is used to (a) compute the loss and (b) apply the gradient step. logdir specifies the directory where the checkpoints and event files are stored. We can limit the number of gradient steps taken to any number. In this case, we've asked for 1000 steps to be taken. Finally, save_summaries_secs=300 indicates that we'll compute summaries every 5 minutes and save_interval_secs=600 indicates that we'll save a model checkpoint every 10 minutes.
Working Example: Training the VGG16 Model

To illustrate this, let's examine the following sample of training the VGG network:
import tensorflow as tf

slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg

...

train_log_dir = ...
if not tf.gfile.Exists(train_log_dir):
    tf.gfile.MakeDirs(train_log_dir)

with tf.Graph().as_default():
    # Set up the data loading:
    images, labels = ...

    # Define the model:
    predictions = vgg.vgg16(images, is_training=True)

    # Specify the loss function:
    slim.losses.softmax_cross_entropy(predictions, labels)

    total_loss = slim.losses.get_total_loss()
    tf.summary.scalar('losses/total_loss', total_loss)

    # Specify the optimization scheme:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)

    # create_train_op that ensures that when we evaluate it to get the loss,
    # the update_ops are done and the gradient updates are computed.
    train_tensor = slim.learning.create_train_op(total_loss, optimizer)

    # Actually runs training.
    slim.learning.train(train_tensor, train_log_dir)
Fine-Tuning Existing Models

Brief Recap on Restoring Variables from a Checkpoint

After a model has been trained, it can be restored using tf.train.Saver() which restores Variables from a given checkpoint. For many cases, tf.train.Saver() provides a simple mechanism to restore all or just a few variables.
# Create some variables.
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
...

# Add ops to restore all the variables.
restorer = tf.train.Saver()

# Add ops to restore some variables.
restorer = tf.train.Saver([v1, v2])

# Later, launch the model, use the saver to restore variables from disk, and
# do some work with the model.
with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...
See Restoring Variables and Choosing which Variables to Save and Restore sections of the Variables page for more details.
Partially Restoring Models

It is often desirable to fine-tune a pre-trained model on an entirely new dataset or even a new task. In these situations, one can use TF-Slim's helper functions to select a subset of variables to restore:
# Create some variables.
v1 = slim.variable(name="v1", ...)
v2 = slim.variable(name="nested/v2", ...)
...

# Get list of variables to restore (which contains only 'v2'). These are all
# equivalent methods:
variables_to_restore = slim.get_variables_by_name("v2")
# or
variables_to_restore = slim.get_variables_by_suffix("2")
# or
variables_to_restore = slim.get_variables(scope="nested")
# or
variables_to_restore = slim.get_variables_to_restore(include=["nested"])
# or
variables_to_restore = slim.get_variables_to_restore(exclude=["v1"])

# Create the saver which will be used to restore the variables.
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
    print("Model restored.")
    # Do some work with the model
    ...
Restoring models with different variable names

When restoring variables from a checkpoint, the Saver locates the variable names in a checkpoint file and maps them to variables in the current graph. Above, we created a saver by passing to it a list of variables. In this case, the names of the variables to locate in the checkpoint file were implicitly obtained from each provided variable's var.op.name.
This works well when the variable names in the checkpoint file match those in the graph. However, sometimes we want to restore a model from a checkpoint whose variables have different names than those in the current graph. In this case, we must provide the Saver a dictionary that maps from each checkpoint variable name to each graph variable. Consider the following example where the checkpoint variable names are obtained via a simple function:
# Assuming that 'conv1/weights' should be restored from 'vgg16/conv1/weights'
def name_in_checkpoint(var):
    return 'vgg16/' + var.op.name

# Assuming that 'conv1/weights' and 'conv1/bias' should be restored from
# 'conv1/params1' and 'conv1/params2'
def name_in_checkpoint(var):
    if "weights" in var.op.name:
        return var.op.name.replace("weights", "params1")
    if "bias" in var.op.name:
        return var.op.name.replace("bias", "params2")

variables_to_restore = slim.get_model_variables()
variables_to_restore = {name_in_checkpoint(var): var for var in variables_to_restore}
restorer = tf.train.Saver(variables_to_restore)

with tf.Session() as sess:
    # Restore variables from disk.
    restorer.restore(sess, "/tmp/model.ckpt")
Fine-Tuning a Model on a different task

Consider the case where we have a pre-trained VGG16 model. The model was trained on the ImageNet dataset, which has 1000 classes. However, we would like to apply it to the Pascal VOC dataset which has only 20 classes. To do so, we can initialize our new model using the values of the pre-trained model excluding the final layer:
# Load the Pascal VOC data
image, label = MyPascalVocDataLoader(...)
images, labels = tf.train.batch([image, label], batch_size=32)

# Create the model
predictions = vgg.vgg_16(images)

train_op = slim.learning.create_train_op(...)

# Specify where the Model, trained on ImageNet, was saved.
model_path = '/path/to/pre_trained_on_imagenet.checkpoint'

# Specify where the new model will live:
log_dir = '/path/to/my_pascal_model_dir/'

# Restore only the convolutional layers:
variables_to_restore = slim.get_variables_to_restore(exclude=['fc6', 'fc7', 'fc8'])
init_fn = assign_from_checkpoint_fn(model_path, variables_to_restore)

# Start training.
slim.learning.train(train_op, log_dir, init_fn=init_fn)
Evaluating Models.

Once we've trained a model (or even while the model is busy training) we'd like to see how well the model performs in practice. This is accomplished by picking a set of evaluation metrics, which will grade the model's performance, and the evaluation code which actually loads the data, performs inference, compares the results to the ground truth and records the evaluation scores. This step may be performed once or repeated periodically.
Metrics

We define a metric to be a performance measure that is not a loss function (losses are directly optimized during training), but which we are still interested in for the purpose of evaluating our model. For example, we might want to minimize log loss, but our metrics of interest might be F1 score (test accuracy), or Intersection Over Union score (which are not differentiable, and therefore cannot be used as losses).
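To make the distinction concrete, here is a small sketch of our own of IoU as an evaluation metric; as written it thresholds the predictions, so it is not differentiable and could not serve as a training loss:

import numpy as np

def iou_score(pred_probs, true_mask, threshold=0.5):
    # Binarize predictions, then compute intersection-over-union.
    pred_mask = pred_probs > threshold
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / float(union)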
TF-Slim provides a set of metric operations that makes evaluating models easy. Abstractly, computing the value of a metric can be divided into three parts:
Initialization: initialize the variables used to compute the metrics.
Aggregation: perform operations (sums, etc) used to compute the metrics.
Finalization: (optionally) perform any final operation to compute metric values. For example, computing means, mins, maxes, etc.
For example, to compute mean_absolute_error, two variables, count and total, are initialized to zero. During aggregation, we observe some set of predictions and labels, compute their absolute differences and add the sum to total. Each time we observe another value, count is incremented. Finally, during finalization, total is divided by count to obtain the mean.
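The same three-phase pattern can be sketched in plain Python (illustrative only, not the TF-Slim implementation):

# Initialization: the metric state.
total, count = 0.0, 0

# Aggregation: called once per batch of predictions/labels.
def update(predictions, labels):
    global total, count
    total += sum(abs(p - l) for p, l in zip(predictions, labels))
    count += len(predictions)

# Finalization: compute the metric value from the state.
def value():
    return total / count

update([2.0, 4.0], [1.0, 1.0])  # total = 4.0, count = 2
print(value())                  # 2.0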
The following example demonstrates the API for declaring metrics. Because metrics are often evaluated on a test set which is different from the training set (upon which the loss is computed), we'll assume we're using test data:
images, labels = LoadTestData(...)
predictions = MyModel(images)

mae_value_op, mae_update_op = slim.metrics.streaming_mean_absolute_error(predictions, labels)
mre_value_op, mre_update_op = slim.metrics.streaming_mean_relative_error(predictions, labels)
pl_value_op, pl_update_op = slim.metrics.percentage_less(mean_relative_errors, 0.3)
As the example illustrates, the creation of a metric returns two values: a value_op and an update_op. The value_op is an idempotent operation that returns the current value of the metric. The update_op is an operation that performs the aggregation step mentioned above as well as returning the value of the metric.
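A minimal usage sketch for one such pair (our own, assuming the mae ops created above; note that the metric state lives in local variables):

num_batches = 100  # illustrative

with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())  # metric variables are local
    for _ in range(num_batches):
        sess.run(mae_update_op)      # aggregation step, once per batch
    print(sess.run(mae_value_op))    # idempotent read of the current value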
Keeping track of each value_op and update_op can be laborious. To deal with this, TF-Slim provides two convenience functions:
# Aggregates the value and update ops in two lists:
value_ops, update_ops = slim.metrics.aggregate_metrics(
    slim.metrics.streaming_mean_absolute_error(predictions, labels),
    slim.metrics.streaming_mean_squared_error(predictions, labels))

# Aggregates the value and update ops in two dictionaries:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})
Working example: Tracking Multiple Metrics

Putting it all together:
import tensorflow as tf

slim = tf.contrib.slim
vgg = tf.contrib.slim.nets.vgg

# Load the data
images, labels = load_data(...)

# Define the network
predictions = vgg.vgg_16(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    "eval/mean_absolute_error": slim.metrics.streaming_mean_absolute_error(predictions, labels),
    "eval/mean_squared_error": slim.metrics.streaming_mean_squared_error(predictions, labels),
})

# Evaluate the model using 1000 batches of data:
num_batches = 1000

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())

    for batch_id in range(num_batches):
        sess.run(names_to_updates.values())

    metric_values = sess.run(names_to_values.values())
    for metric, value in zip(names_to_values.keys(), metric_values):
        print('Metric %s has value: %f' % (metric, value))

Note that metric_ops.py can be used in isolation without using either layers.py or loss_ops.py.

Evaluation Loop

TF-Slim provides an evaluation module (evaluation.py), which contains helper functions for writing model evaluation scripts using metrics from the metric_ops.py module. These include a function for periodically running evaluations, evaluating metrics over batches of data and printing and summarizing metric results. For example:
import tensorflow as tf

slim = tf.contrib.slim

# Load the data
images, labels = load_data(...)

# Define the network
predictions = MyModel(images)

# Choose the metrics to compute:
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'accuracy': slim.metrics.accuracy(predictions, labels),
    'precision': slim.metrics.precision(predictions, labels),
    'recall': slim.metrics.recall(mean_relative_errors, 0.3),
})

# Create the summary ops such that they also print out to std output:
summary_ops = []
for metric_name, metric_value in names_to_values.iteritems():
    op = tf.summary.scalar(metric_name, metric_value)
    op = tf.Print(op, [metric_value], metric_name)
    summary_ops.append(op)

num_examples = 10000
batch_size = 32
num_batches = math.ceil(num_examples / float(batch_size))

# Setup the global step.
slim.get_or_create_global_step()

output_dir = ...  # Where the summaries are stored.
eval_interval_secs = ...  # How often to run the evaluation.
slim.evaluation.evaluation_loop(
    'local',
    checkpoint_dir,
    log_dir,
    num_evals=num_batches,
    eval_op=names_to_updates.values(),
    summary_op=tf.summary.merge(summary_ops),
    eval_interval_secs=eval_interval_secs)

Credit to the original post: http://blog.csdn.net/guvcolie/article/details/77686555
