Densenet-Tensorflow


While looking for a DenseNet implementation, I found this clean and well-structured network code, which I am backing up here.

https://github.com/taki0112/Densenet-Tensorflow

Densenet-Tensorflow

Tensorflow implementation of Densenet using Cifar10, MNIST

  • The code that implements the paper (Densely Connected Convolutional Networks) is Densenet.py
  • There is a slight difference: I used AdamOptimizer

If you want to see the original author's code or other implementations, please refer to this link

 

Requirements

  • Tensorflow 1.x
  • Python 3.x
  • tflearn (if you want an easy way to do global average pooling, you can install tflearn;
    however, I implemented it using tf.layers, so don't worry)

Issue

  • I used tf.contrib.layers.batch_norm

    from tensorflow.contrib.layers import batch_norm
    from tensorflow.contrib.framework import arg_scope

    def Batch_Normalization(x, training, scope):
        with arg_scope([batch_norm],
                       scope=scope,
                       updates_collections=None,
                       decay=0.9,
                       center=True,
                       scale=True,
                       zero_debias_moving_mean=True):
            return tf.cond(training,
                           lambda: batch_norm(inputs=x, is_training=training, reuse=None),
                           lambda: batch_norm(inputs=x, is_training=training, reuse=True))
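Two details worth noting: updates_collections=None makes batch_norm apply the moving-average updates in place at every training step (simpler, if slightly slower, than running the UPDATE_OPS collection yourself), and the reuse=None / reuse=True pair inside tf.cond lets the training and inference branches share a single set of batch-norm variables.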

 

  • If you don't have enough GPU memory, please edit the session creation:

    with tf.Session() as sess :  # NO
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess :  # OK
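If memory is still tight, a further option (standard TF 1.x configuration, not shown in the original README) is to let the session claim GPU memory on demand instead of reserving it all upfront:

    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.allow_growth = True  # allocate GPU memory incrementally
    with tf.Session(config=config) as sess:
        pass  # build and run the graph as before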

Idea

What is the "Global Average Pooling" ?

    import numpy as np
    import tensorflow as tf

    def Global_Average_Pooling(x, stride=1):
        # pool over the whole (static) spatial extent, so each feature map becomes a single value
        width = np.shape(x)[1]
        height = np.shape(x)[2]
        pool_size = [width, height]
        return tf.layers.average_pooling2d(inputs=x, pool_size=pool_size, strides=stride)
        # the stride value does not matter: the pool already covers the entire map

If you prefer tflearn, the equivalent is:
    def Global_Average_Pooling(x):
        return tflearn.layers.conv.global_avg_pool(x, name='Global_avg_pooling')
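Either way, global average pooling just averages each feature map down to a single number. An equivalent one-liner (note that it returns shape [N, C] rather than [N, 1, 1, C], assuming NHWC input) is:

    def Global_Average_Pooling(x):
        return tf.reduce_mean(x, axis=[1, 2])  # average over height and width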

 

What is the "Dense Connectivity" ?

(Figure: dense connectivity)
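In a dense block, layer l receives the feature maps of every preceding layer, concatenated along the channel axis: x_l = H_l([x_0, x_1, ..., x_{l-1}]), where H_l is the composite BN -> ReLU -> Conv function. With NHWC tensors this concatenation is just tf.concat(layers, axis=3), which is what the Concatenation helper used below does.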

What is the "Densenet Architecture" ?

(Figure: DenseNet architecture)

    def Dense_net(self, input_x):
        # stem: 7x7 conv with 2k filters (k = self.filters, the growth rate), stride 2, then 3x3 max-pool
        x = conv_layer(input_x, filter=2 * self.filters, kernel=[7,7], stride=2, layer_name='conv0')
        x = Max_Pooling(x, pool_size=[3,3], stride=2)

        x = self.dense_block(input_x=x, nb_layers=6, layer_name='dense_1')
        x = self.transition_layer(x, scope='trans_1')

        x = self.dense_block(input_x=x, nb_layers=12, layer_name='dense_2')
        x = self.transition_layer(x, scope='trans_2')

        x = self.dense_block(input_x=x, nb_layers=48, layer_name='dense_3')
        x = self.transition_layer(x, scope='trans_3')

        x = self.dense_block(input_x=x, nb_layers=32, layer_name='dense_final')  # 6-12-48-32 layers per block: the DenseNet-201 configuration
        
        x = Batch_Normalization(x, training=self.training, scope='linear_batch')
        x = Relu(x)
        x = Global_Average_Pooling(x)
        x = Linear(x)

        return x
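The snippets in this README call small wrappers (conv_layer, Max_Pooling, Average_pooling, Relu, Drop_out, Concatenation, Linear) without showing them. Here is a minimal sketch of what they plausibly look like on top of tf.layers; the names and call signatures are taken from how they are used above, and class_num is a hypothetical global for the number of output classes:

    import tensorflow as tf

    def conv_layer(input, filter, kernel, stride=1, layer_name="conv"):
        # bias-free SAME convolution; batch normalization provides the shift
        with tf.name_scope(layer_name):
            return tf.layers.conv2d(inputs=input, filters=filter, kernel_size=kernel,
                                    strides=stride, padding='SAME', use_bias=False)

    def Relu(x):
        return tf.nn.relu(x)

    def Drop_out(x, rate, training):
        return tf.layers.dropout(inputs=x, rate=rate, training=training)  # active only while training

    def Max_Pooling(x, pool_size=[3, 3], stride=2, padding='SAME'):
        return tf.layers.max_pooling2d(inputs=x, pool_size=pool_size, strides=stride, padding=padding)

    def Average_pooling(x, pool_size=[2, 2], stride=2, padding='SAME'):
        return tf.layers.average_pooling2d(inputs=x, pool_size=pool_size, strides=stride, padding=padding)

    def Concatenation(layers):
        return tf.concat(layers, axis=3)  # stack feature maps along the channel axis (NHWC)

    def Linear(x):
        return tf.layers.dense(inputs=x, units=class_num, name='linear')  # final fully connected layer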

 

What is the "Dense Block" ?

(Figure: dense block)

    def dense_block(self, input_x, nb_layers, layer_name):
        with tf.name_scope(layer_name):
            layers_concat = list()
            layers_concat.append(input_x)

            x = self.bottleneck_layer(input_x, scope=layer_name + '_bottleN_' + str(0))

            layers_concat.append(x)

            for i in range(nb_layers - 1):
                x = Concatenation(layers_concat)
                x = self.bottleneck_layer(x, scope=layer_name + '_bottleN_' + str(i + 1))
                layers_concat.append(x)

            # concatenate the final layer's output as well, so the whole block's
            # feature maps (not just the last bottleneck's) are passed on
            x = Concatenation(layers_concat)

            return x
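Each bottleneck contributes k = self.filters new feature maps, so a dense block with nb_layers = n turns m input channels into m + n*k output channels; dense_3 above, for example, adds 48*k channels before its transition layer.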

 

What is the "Bottleneck Layer" ?

    def bottleneck_layer(self, x, scope):
        with tf.name_scope(scope):
            # 1x1 conv: expand to 4k feature maps (k = self.filters, the growth rate)
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            x = conv_layer(x, filter=4 * self.filters, kernel=[1,1], layer_name=scope+'_conv1')
            x = Drop_out(x, rate=dropout_rate, training=self.training)

            # 3x3 conv: reduce back to k feature maps
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch2')
            x = Relu(x)
            x = conv_layer(x, filter=self.filters, kernel=[3,3], layer_name=scope+'_conv2')
            x = Drop_out(x, rate=dropout_rate, training=self.training)

            return x
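The BN -> ReLU -> Conv ordering is the pre-activation composite function from the paper, and expanding to 4k feature maps with a 1x1 convolution before reducing back to k with the 3x3 convolution is the DenseNet-B bottleneck: it keeps the 3x3 convolution cheap even deep inside a block, where the concatenated input can be hundreds of channels wide.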

 

What is the "Transition Layer" ?

    def transition_layer(self, x, scope):
        with tf.name_scope(scope):
            x = Batch_Normalization(x, training=self.training, scope=scope+'_batch1')
            x = Relu(x)
            x = conv_layer(x, filter=self.filters, kernel=[1,1], layer_name=scope+'_conv1')
            x = Drop_out(x, rate=dropout_rate, training=self.training)
            x = Average_pooling(x, pool_size=[2,2], stride=2)

            return x
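The transition layer shrinks the feature maps between dense blocks: a 1x1 convolution reduces the channel count and 2x2 average pooling halves the spatial resolution. Note that this code always compresses to self.filters channels, whereas the DenseNet-BC variant in the paper keeps a fixed fraction (theta, typically 0.5) of the block's output channels.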

 

Compare Structure (CNN, ResNet, DenseNet)

(Figure: structure comparison of plain CNN, ResNet, and DenseNet)

Results

  • (MNIST) The highest test accuracy is 99.2% (this result does not use dropout).
  • For this result, the number of layers in each dense block is fixed to 4:

    for i in range(self.nb_blocks):
        # original : 6 -> 12 -> 48
        x = self.dense_block(input_x=x, nb_layers=4, layer_name='dense_'+str(i))
        x = self.transition_layer(x, scope='trans_'+str(i))

 

CIFAR-10

(Figure: CIFAR-10 results)

CIFAR-100

(Figure: CIFAR-100 results)

ImageNet

(Figure: ImageNet results)

Author

Junho Kim
