ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks

Introduction: Recently, **channel attention mechanisms** have shown great potential for improving the performance of deep convolutional neural networks (CNNs). However, most existing methods focus on developing more sophisticated attention modules for better performance, which inevitably increases model complexity. To overcome the paradox of the performance-complexity trade-off, **this paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain**. By dissecting the channel attention module in SENet, we empirically show that **avoiding dimensionality reduction is important for learning channel attention**, and that **appropriate cross-channel interaction can preserve performance while significantly reducing model complexity**. Therefore, **we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented by a 1D convolution**. Furthermore, **we develop a method to adaptively select the kernel size of the 1D convolution, determining the coverage of local cross-channel interaction**.

@toc

Reference paper: https://ieeexplore.ieee.org/document/9156697

Authors: Qilong Wang; Banggu Wu; Pengfei Zhu; Peihua Li; Wangmeng Zuo; Qinghua Hu

1. What Is an Attention Mechanism?

The core idea of an attention mechanism is to make the network focus on the places that matter most.

When we use a convolutional neural network to process an image, we would like it to attend to the regions that deserve attention rather than treating everything equally. Since we cannot manually tune which regions to attend to, it becomes essential to let the network learn, in an adaptive way, which objects are important. The attention mechanism is one way to give a network this adaptive focus.

An attention mechanism filters the important information out of a large amount of input. There are many ways to introduce attention into a neural network. Taking CNNs as an example, attention can be added along the spatial dimension, along the channel dimension (SENet), or along both at once, i.e., mixed spatial and channel attention (CBAM).

In general, attention mechanisms can be divided into channel attention, spatial attention, and combinations of the two.

2. Introduction

Recently, channel attention mechanisms have shown great potential for improving the performance of deep convolutional neural networks (CNNs). However, most existing methods focus on developing more sophisticated attention modules for better performance, which inevitably increases model complexity. To overcome the paradox of the performance-complexity trade-off, this paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain. By dissecting the channel attention module in SENet, we empirically show that avoiding dimensionality reduction is important for learning channel attention, and that appropriate cross-channel interaction can preserve performance while significantly reducing model complexity. Therefore, we propose a local cross-channel interaction strategy without dimensionality reduction, which can be efficiently implemented by a 1D convolution. Furthermore, we develop a method to adaptively select the kernel size of the 1D convolution, determining the coverage of local cross-channel interaction. The proposed ECA module is both efficient and effective.

These are the key points of the abstract, and there are three of them: avoiding dimensionality reduction, appropriate cross-channel interaction (strategy: 1D convolution), and an algorithm for adaptively choosing the 1D kernel size.

The authors' main contribution is an improvement over the SENet channel attention mechanism: the fully connected layers are removed, and a 1D convolution layer is attached after the global average pooling instead.

[Figure 1: comparison of attention modules with ResNet backbones in terms of accuracy, parameters, and FLOPs]

The figure above compares attention modules using ResNet as the backbone in terms of classification accuracy, network parameters, and FLOPs.

The horizontal axis is the number of network parameters (in millions); the vertical axis is Top-1 accuracy.

As the figure shows, ECA-Net achieves higher accuracy while keeping model complexity lower.

3. The ECANet Attention Module

3.1 A Review of the SENet Module

[Figure: the SE (Squeeze-and-Excitation) module]

Implementation steps:

  • Apply global average pooling to the input feature map.
  • Apply two fully connected layers (both can be replaced by 1×1 convolutions); the first FC layer has fewer neurons, and the second has as many neurons as the input feature map has channels.
  • After the two FC layers, apply a sigmoid to squash the values into the range 0-1; these are the per-channel weights (between 0 and 1) of the input feature map.
  • Multiply these weights with the original input feature map channel-wise.
Given the input features, the SE block first applies global average pooling to each channel independently, then uses two fully connected (FC) layers with a non-linearity, followed by a sigmoid function, to generate the channel weights. The two FC layers are designed to capture non-linear cross-channel interaction, and dimensionality reduction is involved to control model complexity. Although this strategy is widely used in subsequent channel attention modules [33, 13, 9], ==our empirical studies show that dimensionality reduction brings side effects to channel attention prediction, and that capturing dependencies across all channels is inefficient and unnecessary==.
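
To make the later comparison with ECA concrete, here is a minimal sketch of an SE block in Keras following the steps above. It is an illustrative sketch rather than the authors' code: the function name se_block and the reduction ratio r=16 (the common default in SENet) are my own choices.

from keras.layers import Dense, GlobalAveragePooling2D, Input, Reshape, multiply
from keras.models import Model

def se_block(input_feature, r=16):
    # number of channels of the input feature map
    channel = int(input_feature.shape[-1])
    # squeeze: global average pooling, [h,w,c] -> [c]
    x = GlobalAveragePooling2D()(input_feature)
    # excitation: two FC layers with dimensionality reduction, c -> c/r -> c
    x = Dense(channel // r, activation='relu')(x)
    x = Dense(channel, activation='sigmoid')(x)
    # reshape to [1,1,c] and rescale the input channel by channel
    x = Reshape((1, 1, channel))(x)
    return multiply([input_feature, x])

if __name__ == '__main__':
    inputs = Input([26, 26, 512])
    Model(inputs, se_block(inputs)).summary()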

3.2 The ECANet Module

The ECANet authors argue that dimensionality reduction brings side effects to channel attention prediction and that capturing dependencies across all channels is inefficient and unnecessary. They therefore propose an Efficient Channel Attention (ECA) module for deep CNNs that avoids dimensionality reduction and captures cross-channel interaction in an efficient way, as shown in the figure below.

[Figure 2: the ECA module]

Figure 2. Diagram of our Efficient Channel Attention (ECA) module. Given the aggregated features obtained by global average pooling (GAP), ECA generates channel weights by performing a fast 1D convolution of size k, where k is adaptively determined via a mapping of the channel dimension C.

Compared with the SENet module above, ECANet removes the fully connected layers after global average pooling and replaces them with a 1D convolution.

After channel-wise global average pooling without dimensionality reduction, ECANet uses a 1D convolution to realize cross-channel information interaction, with the kernel size chosen adaptively by a function of the channel dimension. The kernel size k represents the coverage of local cross-channel interaction, i.e., how many neighbours participate in predicting the attention of one channel. This approach is shown to guarantee both efficiency and effectiveness.

Given the channel dimension C, the kernel size k can be adaptively determined as:

$$ k=\psi (C)=\left | \frac{\log_{2}{(C)} }{\gamma }+\frac{b}{\gamma } \right |_{odd} $$

where $\left | t \right |_{odd}$ denotes the odd number closest to t. In the paper, $\gamma=2$ and $b=1$ are used in all experiments.

Clearly, through the mapping ψ, high-dimensional channels interact over a longer range, while low-dimensional channels interact over a shorter range, thanks to the non-linear mapping.
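
As a quick sanity check of the formula, the small standalone sketch below computes k for typical ResNet channel counts with γ=2 and b=1. The helper name eca_kernel_size is my own; it mirrors the rounding used in the implementation in Section 3.3 (truncate, then bump an even result up to the next odd number), except that math.log2 is used here so powers of two are exact.

import math

def eca_kernel_size(channels, b=1, gamma=2):
    # k = (log2(C) + b) / gamma, truncated to an int and bumped up to an odd number
    k = int(abs((math.log2(channels) + b) / gamma))
    return k if k % 2 else k + 1

for c in (64, 128, 256, 512, 1024, 2048):
    print(c, '->', eca_kernel_size(c))
# 64 -> 3, 128 -> 5, 256 -> 5, 512 -> 5, 1024 -> 5, 2048 -> 7

These values line up with the Conv1D parameter counts that appear in the model summaries later (5 weights for the 256/512/1024-channel stages, 7 weights for the 2048-channel stage).
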
To summarize, the whole ECA module works in four steps:
  • (1) Apply global average pooling to the input feature map, turning the [h,w,c] tensor into a [1,1,c] vector.
  • (2) Compute the adaptive 1D kernel size kernel_size.
  • (3) Apply a 1D convolution with that kernel_size to obtain a weight for each channel of the feature map.
  • (4) Multiply the normalized weights with the original input feature map channel by channel to produce the weighted feature map.

3.3 Reproducing the ECANet Code

The original authors implemented it in PyTorch; since I am more familiar with TensorFlow, I write it in TensorFlow here.
import math

import tensorflow as tf
from keras.layers import (Activation, Add, Concatenate, Conv1D, Conv2D, Dense,
                          GlobalAveragePooling2D, GlobalMaxPooling2D, Lambda, BatchNormalization,
                          Reshape, multiply,Input)
from keras.models import Model
'''
The idea of the ECA module is very simple: it removes the fully connected layers
of the original SE module and learns directly on the globally average-pooled
features through a single 1D convolution.
'''
def ECA_Block(input_feature, b=1, gamma=2, name=""):
    # number of channels of the input feature map
    channel = input_feature.shape[-1]
    # compute the adaptive kernel size from the formula
    kernel_size = int(abs((math.log(channel, 2) + b) / gamma))
    # keep kernel_size if it is odd; if it is even, add 1 to make it odd
    kernel_size = kernel_size if kernel_size % 2 else kernel_size + 1
    # global average pooling: [None,h,w,c] -> [None,c]
    x = GlobalAveragePooling2D()(input_feature)
    # [None,c]->[None,c,1]
    x = Reshape((-1, 1))(x)
    #[None,c,1]->[None,c,1]
    x = Conv1D(1, kernel_size=kernel_size, padding="same",use_bias=False,name="eca_layer_" + str(name))(x)
    x = Activation('sigmoid')(x)
    # [None,c,1]=>[None,1,1,c]
    x = Reshape((1, 1, -1))(x)
    # multiply the attention weights with the input feature map
    output = multiply([input_feature, x])
    return output

if __name__ == '__main__':
    inputs=Input([26,26,512])
    x=ECA_Block(inputs)
    model=Model(inputs,x)
    model.summary()


Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 26, 26, 512)  0                                            
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 512)          0           input_1[0][0]                    
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 512, 1)       0           global_average_pooling2d_1[0][0] 
__________________________________________________________________________________________________
eca_layer_ (Conv1D)             (None, 512, 1)       5           reshape_1[0][0]                  
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 512, 1)       0           eca_layer_[0][0]                 
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 1, 1, 512)    0           activation_1[0][0]               
__________________________________________________________________________________________________
multiply_1 (Multiply)           (None, 26, 26, 512)  0           input_1[0][0]                    
                                                                 reshape_2[0][0]                  
==================================================================================================
Total params: 5
Trainable params: 5
Non-trainable params: 0
This number of parameters is completely negligible. Only the 1D convolution layer has parameters, so the parameter count equals the size of the 1D kernel (no bias is used).
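
For a rough sense of how small this is, here is a back-of-the-envelope comparison with an SE block on the same 512-channel feature map, assuming the commonly used reduction ratio r=16 and ignoring biases; the numbers are my own arithmetic, not from the paper.

c, r = 512, 16
se_weights = c * (c // r) + (c // r) * c   # two FC layers: 512*32 + 32*512 = 32768
eca_weights = 5                            # single 1D kernel of size 5 for C = 512
print(se_weights, eca_weights)             # 32768 vs 5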

4. A Flower Classification Test with ECA-ResNet50

4.1 Importing Dependencies

import numpy as np
import math
import os
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense, Dropout, Conv2D, MaxPool2D, Flatten, GlobalAvgPool2D, \
    BatchNormalization, Activation, Add, ZeroPadding2D, Multiply,Conv1D,GlobalAveragePooling2D,Reshape,multiply
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import LearningRateScheduler
from tensorflow.keras.models import Model

4.2 Hyperparameter Settings

# number of classes
num_classes = 17
# batch size
batch_size = 32
# number of epochs
epochs = 100
# image size
image_size = 224

4.3 Data Augmentation

# apply data augmentation to the training set
train_datagen = ImageDataGenerator(
    rotation_range=20,  # random rotation (degrees)
    width_shift_range=0.1,  # random horizontal shift
    height_shift_range=0.1,  # random vertical shift
    rescale=1 / 255,  # normalize the pixel values
    shear_range=10,  # random shear transform
    zoom_range=0.1,  # random zoom
    horizontal_flip=True,  # random horizontal flip
    brightness_range=(0.7, 1.3),  # brightness variation
    fill_mode='nearest',  # fill mode
)
# the test set only needs normalization
test_datagen = ImageDataGenerator(
    rescale=1 / 255,  # normalize the pixel values
)

4.4 Data Generators

# training data generator that automatically produces batches during training
# it reads the training images from the '../data/train' directory
# every image is resized to image_size × image_size
# the generator yields batch_size samples at a time
train_generator = train_datagen.flow_from_directory(
    '../data/train',
    target_size=(image_size, image_size),
    batch_size=batch_size,
)

# test data generator
test_generator = test_datagen.flow_from_directory(
    '../data/test',
    target_size=(image_size, image_size),
    batch_size=batch_size,
)
# the dictionary keys are the names of the 17 class folders; the values are the corresponding class indices
print(train_generator.class_indices)


4.5 The ECA-Net Attention Module

'''
The idea of the ECA module is very simple: it removes the fully connected layers
of the original SE module and learns directly on the globally average-pooled
features through a single 1D convolution.
'''
def eca_block(input_feature, b=1, gamma=2):
    channel = input_feature.shape[-1]
    # compute the adaptive kernel size from the formula
    kernel_size = int(abs((math.log(channel, 2) + b) / gamma))
    # keep kernel_size if it is odd; if it is even, add 1 to make it odd
    kernel_size = kernel_size if kernel_size % 2 else kernel_size + 1
    # global average pooling: [h,w,c] -> [c]
    x = GlobalAveragePooling2D()(input_feature)
    # [c,1]
    x = Reshape((-1, 1))(x)
    # 1D convolution across channels
    x = Conv1D(1, kernel_size=kernel_size, padding="same",use_bias=False)(x)
    x = Activation('sigmoid')(x)
    # [c,1]=>[1,1,c]
    x = Reshape((1, 1, -1))(x)

    output = multiply([input_feature, x])
    return output

4.6 Defining the Residual Unit

# define the residual unit
def block(x, filters, strides=1, conv_shortcut=True):
    # projection shortcut
    if conv_shortcut == True:
        shortcut = Conv2D(filters * 4, kernel_size=1, strides=strides, padding='valid')(x)
        # epsilon prevents division by zero in the BN formula
        shortcut = BatchNormalization(epsilon=1.001e-5)(shortcut)
    else:
        # identity_shortcut
        shortcut = x
    # three convolution layers
    x = Conv2D(filters=filters, kernel_size=1, strides=strides, padding='valid')(x)
    x = BatchNormalization(epsilon=1.001e-5)(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=filters, kernel_size=3, strides=1, padding='same')(x)
    x = BatchNormalization(epsilon=1.001e-5)(x)
    x = Activation('relu')(x)

    x = Conv2D(filters=filters * 4, kernel_size=1, strides=1, padding='valid')(x)
    x = BatchNormalization(epsilon=1.001e-5)(x)

    # ECA-Net module
    x = eca_block(x)

    x = Add()([x, shortcut])
    x = Activation('relu')(x)
    return x

4.7 Stacking Residual Units

# stack residual units
def stack(x, filters, blocks, strides):
    x = block(x, filters, strides=strides)
    for i in range(blocks - 1):
        x = block(x, filters, conv_shortcut=False)
    return x

4.8 Building the ECA-ResNet50 Network

# define ECA-ResNet50
inputs = Input(shape=(image_size, image_size, 3))
# pad with 3 rings of zeros; the image grows from 224×224 to 230×230
x = ZeroPadding2D((3, 3))(inputs)
x = Conv2D(filters=64, kernel_size=7, strides=2, padding='valid')(x)
x = BatchNormalization(epsilon=1.001e-5)(x)
x = Activation('relu')(x)
# pad with 1 ring of zeros
x = ZeroPadding2D((1, 1))(x)
x = MaxPool2D(pool_size=3, strides=2, padding='valid')(x)
# stack the residual structures
# blocks is the number of residual units to stack
x = stack(x, filters=64, blocks=3, strides=1)
x = stack(x, filters=128, blocks=4, strides=2)
x = stack(x, filters=256, blocks=6, strides=2)
x = stack(x, filters=512, blocks=3, strides=2)
# global average pooling over the feature map, yielding 2-D output
x = GlobalAvgPool2D()(x)
x = Dense(num_classes, activation='softmax')(x)
# define the model
model = Model(inputs=inputs, outputs=x)
model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D)  (None, 230, 230, 3)  0           input_1[0][0]                    
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 112, 112, 64) 9472        zero_padding2d[0][0]             
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 112, 112, 64) 256         conv2d[0][0]                     
__________________________________________________________________________________________________
activation (Activation)         (None, 112, 112, 64) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, 114, 114, 64) 0           activation[0][0]                 
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 56, 56, 64)   0           zero_padding2d_1[0][0]           
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 56, 56, 64)   4160        max_pooling2d[0][0]              
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 56, 56, 64)   256         conv2d_2[0][0]                   
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 56, 56, 64)   0           batch_normalization_2[0][0]      
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 56, 56, 64)   36928       activation_1[0][0]               
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 56, 56, 64)   256         conv2d_3[0][0]                   
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 56, 56, 64)   0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 56, 56, 256)  16640       activation_2[0][0]               
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 56, 56, 256)  1024        conv2d_4[0][0]                   
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 256)          0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
reshape (Reshape)               (None, 256, 1)       0           global_average_pooling2d[0][0]   
__________________________________________________________________________________________________
conv1d (Conv1D)                 (None, 256, 1)       5           reshape[0][0]                    
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 256, 1)       0           conv1d[0][0]                     
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 1, 1, 256)    0           activation_3[0][0]               
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 56, 56, 256)  16640       max_pooling2d[0][0]              
__________________________________________________________________________________________________
multiply (Multiply)             (None, 56, 56, 256)  0           batch_normalization_4[0][0]      
                                                                 reshape_1[0][0]                  
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 56, 56, 256)  1024        conv2d_1[0][0]                   
__________________________________________________________________________________________________
add (Add)                       (None, 56, 56, 256)  0           multiply[0][0]                   
                                                                 batch_normalization_1[0][0]      
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 56, 56, 256)  0           add[0][0]                        
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 56, 56, 64)   16448       activation_4[0][0]               
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 56, 56, 64)   256         conv2d_5[0][0]                   
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 56, 56, 64)   0           batch_normalization_5[0][0]      
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 56, 56, 64)   36928       activation_5[0][0]               
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 56, 56, 64)   256         conv2d_6[0][0]                   
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 56, 56, 64)   0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 56, 56, 256)  16640       activation_6[0][0]               
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 56, 56, 256)  1024        conv2d_7[0][0]                   
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 256)          0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 256, 1)       0           global_average_pooling2d_1[0][0] 
__________________________________________________________________________________________________
conv1d_1 (Conv1D)               (None, 256, 1)       5           reshape_2[0][0]                  
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 256, 1)       0           conv1d_1[0][0]                   
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 1, 1, 256)    0           activation_7[0][0]               
__________________________________________________________________________________________________
multiply_1 (Multiply)           (None, 56, 56, 256)  0           batch_normalization_7[0][0]      
                                                                 reshape_3[0][0]                  
__________________________________________________________________________________________________
add_1 (Add)                     (None, 56, 56, 256)  0           multiply_1[0][0]                 
                                                                 activation_4[0][0]               
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 56, 56, 256)  0           add_1[0][0]                      
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 56, 56, 64)   16448       activation_8[0][0]               
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 56, 56, 64)   256         conv2d_8[0][0]                   
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 56, 56, 64)   0           batch_normalization_8[0][0]      
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 56, 56, 64)   36928       activation_9[0][0]               
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 56, 56, 64)   256         conv2d_9[0][0]                   
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 56, 56, 64)   0           batch_normalization_9[0][0]      
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 56, 56, 256)  16640       activation_10[0][0]              
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 56, 56, 256)  1024        conv2d_10[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_2 (Glo (None, 256)          0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
reshape_4 (Reshape)             (None, 256, 1)       0           global_average_pooling2d_2[0][0] 
__________________________________________________________________________________________________
conv1d_2 (Conv1D)               (None, 256, 1)       5           reshape_4[0][0]                  
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 256, 1)       0           conv1d_2[0][0]                   
__________________________________________________________________________________________________
reshape_5 (Reshape)             (None, 1, 1, 256)    0           activation_11[0][0]              
__________________________________________________________________________________________________
multiply_2 (Multiply)           (None, 56, 56, 256)  0           batch_normalization_10[0][0]     
                                                                 reshape_5[0][0]                  
__________________________________________________________________________________________________
add_2 (Add)                     (None, 56, 56, 256)  0           multiply_2[0][0]                 
                                                                 activation_8[0][0]               
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 56, 56, 256)  0           add_2[0][0]                      
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 28, 28, 128)  32896       activation_12[0][0]              
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 28, 28, 128)  512         conv2d_12[0][0]                  
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 28, 28, 128)  0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 28, 28, 128)  147584      activation_13[0][0]              
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 28, 28, 128)  512         conv2d_13[0][0]                  
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 28, 28, 128)  0           batch_normalization_13[0][0]     
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 28, 28, 512)  66048       activation_14[0][0]              
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 28, 28, 512)  2048        conv2d_14[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 512)          0           batch_normalization_14[0][0]     
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 512, 1)       0           global_average_pooling2d_3[0][0] 
__________________________________________________________________________________________________
conv1d_3 (Conv1D)               (None, 512, 1)       5           reshape_6[0][0]                  
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 512, 1)       0           conv1d_3[0][0]                   
__________________________________________________________________________________________________
reshape_7 (Reshape)             (None, 1, 1, 512)    0           activation_15[0][0]              
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 28, 28, 512)  131584      activation_12[0][0]              
__________________________________________________________________________________________________
multiply_3 (Multiply)           (None, 28, 28, 512)  0           batch_normalization_14[0][0]     
                                                                 reshape_7[0][0]                  
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 28, 28, 512)  2048        conv2d_11[0][0]                  
__________________________________________________________________________________________________
add_3 (Add)                     (None, 28, 28, 512)  0           multiply_3[0][0]                 
                                                                 batch_normalization_11[0][0]     
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 28, 28, 512)  0           add_3[0][0]                      
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 28, 28, 128)  65664       activation_16[0][0]              
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 28, 28, 128)  512         conv2d_15[0][0]                  
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 28, 28, 128)  0           batch_normalization_15[0][0]     
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 28, 28, 128)  147584      activation_17[0][0]              
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 28, 28, 128)  512         conv2d_16[0][0]                  
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 28, 28, 128)  0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 28, 28, 512)  66048       activation_18[0][0]              
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 28, 28, 512)  2048        conv2d_17[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_4 (Glo (None, 512)          0           batch_normalization_17[0][0]     
__________________________________________________________________________________________________
reshape_8 (Reshape)             (None, 512, 1)       0           global_average_pooling2d_4[0][0] 
__________________________________________________________________________________________________
conv1d_4 (Conv1D)               (None, 512, 1)       5           reshape_8[0][0]                  
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 512, 1)       0           conv1d_4[0][0]                   
__________________________________________________________________________________________________
reshape_9 (Reshape)             (None, 1, 1, 512)    0           activation_19[0][0]              
__________________________________________________________________________________________________
multiply_4 (Multiply)           (None, 28, 28, 512)  0           batch_normalization_17[0][0]     
                                                                 reshape_9[0][0]                  
__________________________________________________________________________________________________
add_4 (Add)                     (None, 28, 28, 512)  0           multiply_4[0][0]                 
                                                                 activation_16[0][0]              
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 28, 28, 512)  0           add_4[0][0]                      
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 28, 28, 128)  65664       activation_20[0][0]              
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 28, 28, 128)  512         conv2d_18[0][0]                  
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 28, 28, 128)  0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 28, 28, 128)  147584      activation_21[0][0]              
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 28, 28, 128)  512         conv2d_19[0][0]                  
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 28, 28, 128)  0           batch_normalization_19[0][0]     
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 28, 28, 512)  66048       activation_22[0][0]              
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 28, 28, 512)  2048        conv2d_20[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_5 (Glo (None, 512)          0           batch_normalization_20[0][0]     
__________________________________________________________________________________________________
reshape_10 (Reshape)            (None, 512, 1)       0           global_average_pooling2d_5[0][0] 
__________________________________________________________________________________________________
conv1d_5 (Conv1D)               (None, 512, 1)       5           reshape_10[0][0]                 
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 512, 1)       0           conv1d_5[0][0]                   
__________________________________________________________________________________________________
reshape_11 (Reshape)            (None, 1, 1, 512)    0           activation_23[0][0]              
__________________________________________________________________________________________________
multiply_5 (Multiply)           (None, 28, 28, 512)  0           batch_normalization_20[0][0]     
                                                                 reshape_11[0][0]                 
__________________________________________________________________________________________________
add_5 (Add)                     (None, 28, 28, 512)  0           multiply_5[0][0]                 
                                                                 activation_20[0][0]              
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 28, 28, 512)  0           add_5[0][0]                      
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 28, 28, 128)  65664       activation_24[0][0]              
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 28, 28, 128)  512         conv2d_21[0][0]                  
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 28, 28, 128)  0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, 28, 28, 128)  147584      activation_25[0][0]              
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 28, 28, 128)  512         conv2d_22[0][0]                  
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 28, 28, 128)  0           batch_normalization_22[0][0]     
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 28, 28, 512)  66048       activation_26[0][0]              
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 28, 28, 512)  2048        conv2d_23[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_6 (Glo (None, 512)          0           batch_normalization_23[0][0]     
__________________________________________________________________________________________________
reshape_12 (Reshape)            (None, 512, 1)       0           global_average_pooling2d_6[0][0] 
__________________________________________________________________________________________________
conv1d_6 (Conv1D)               (None, 512, 1)       5           reshape_12[0][0]                 
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 512, 1)       0           conv1d_6[0][0]                   
__________________________________________________________________________________________________
reshape_13 (Reshape)            (None, 1, 1, 512)    0           activation_27[0][0]              
__________________________________________________________________________________________________
multiply_6 (Multiply)           (None, 28, 28, 512)  0           batch_normalization_23[0][0]     
                                                                 reshape_13[0][0]                 
__________________________________________________________________________________________________
add_6 (Add)                     (None, 28, 28, 512)  0           multiply_6[0][0]                 
                                                                 activation_24[0][0]              
__________________________________________________________________________________________________
activation_28 (Activation)      (None, 28, 28, 512)  0           add_6[0][0]                      
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, 14, 14, 256)  131328      activation_28[0][0]              
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 14, 14, 256)  1024        conv2d_25[0][0]                  
__________________________________________________________________________________________________
activation_29 (Activation)      (None, 14, 14, 256)  0           batch_normalization_25[0][0]     
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 14, 14, 256)  590080      activation_29[0][0]              
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 14, 14, 256)  1024        conv2d_26[0][0]                  
__________________________________________________________________________________________________
activation_30 (Activation)      (None, 14, 14, 256)  0           batch_normalization_26[0][0]     
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 14, 14, 1024) 263168      activation_30[0][0]              
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 14, 14, 1024) 4096        conv2d_27[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_7 (Glo (None, 1024)         0           batch_normalization_27[0][0]     
__________________________________________________________________________________________________
reshape_14 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_7[0][0] 
__________________________________________________________________________________________________
conv1d_7 (Conv1D)               (None, 1024, 1)      5           reshape_14[0][0]                 
__________________________________________________________________________________________________
activation_31 (Activation)      (None, 1024, 1)      0           conv1d_7[0][0]                   
__________________________________________________________________________________________________
reshape_15 (Reshape)            (None, 1, 1, 1024)   0           activation_31[0][0]              
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, 14, 14, 1024) 525312      activation_28[0][0]              
__________________________________________________________________________________________________
multiply_7 (Multiply)           (None, 14, 14, 1024) 0           batch_normalization_27[0][0]     
                                                                 reshape_15[0][0]                 
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 14, 14, 1024) 4096        conv2d_24[0][0]                  
__________________________________________________________________________________________________
add_7 (Add)                     (None, 14, 14, 1024) 0           multiply_7[0][0]                 
                                                                 batch_normalization_24[0][0]     
__________________________________________________________________________________________________
activation_32 (Activation)      (None, 14, 14, 1024) 0           add_7[0][0]                      
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, 14, 14, 256)  262400      activation_32[0][0]              
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, 14, 14, 256)  1024        conv2d_28[0][0]                  
__________________________________________________________________________________________________
activation_33 (Activation)      (None, 14, 14, 256)  0           batch_normalization_28[0][0]     
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (None, 14, 14, 256)  590080      activation_33[0][0]              
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, 14, 14, 256)  1024        conv2d_29[0][0]                  
__________________________________________________________________________________________________
activation_34 (Activation)      (None, 14, 14, 256)  0           batch_normalization_29[0][0]     
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (None, 14, 14, 1024) 263168      activation_34[0][0]              
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, 14, 14, 1024) 4096        conv2d_30[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_8 (Glo (None, 1024)         0           batch_normalization_30[0][0]     
__________________________________________________________________________________________________
reshape_16 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_8[0][0] 
__________________________________________________________________________________________________
conv1d_8 (Conv1D)               (None, 1024, 1)      5           reshape_16[0][0]                 
__________________________________________________________________________________________________
activation_35 (Activation)      (None, 1024, 1)      0           conv1d_8[0][0]                   
__________________________________________________________________________________________________
reshape_17 (Reshape)            (None, 1, 1, 1024)   0           activation_35[0][0]              
__________________________________________________________________________________________________
multiply_8 (Multiply)           (None, 14, 14, 1024) 0           batch_normalization_30[0][0]     
                                                                 reshape_17[0][0]                 
__________________________________________________________________________________________________
add_8 (Add)                     (None, 14, 14, 1024) 0           multiply_8[0][0]                 
                                                                 activation_32[0][0]              
__________________________________________________________________________________________________
activation_36 (Activation)      (None, 14, 14, 1024) 0           add_8[0][0]                      
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (None, 14, 14, 256)  262400      activation_36[0][0]              
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, 14, 14, 256)  1024        conv2d_31[0][0]                  
__________________________________________________________________________________________________
activation_37 (Activation)      (None, 14, 14, 256)  0           batch_normalization_31[0][0]     
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (None, 14, 14, 256)  590080      activation_37[0][0]              
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, 14, 14, 256)  1024        conv2d_32[0][0]                  
__________________________________________________________________________________________________
activation_38 (Activation)      (None, 14, 14, 256)  0           batch_normalization_32[0][0]     
__________________________________________________________________________________________________
conv2d_33 (Conv2D)              (None, 14, 14, 1024) 263168      activation_38[0][0]              
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, 14, 14, 1024) 4096        conv2d_33[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_9 (Glo (None, 1024)         0           batch_normalization_33[0][0]     
__________________________________________________________________________________________________
reshape_18 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_9[0][0] 
__________________________________________________________________________________________________
conv1d_9 (Conv1D)               (None, 1024, 1)      5           reshape_18[0][0]                 
__________________________________________________________________________________________________
activation_39 (Activation)      (None, 1024, 1)      0           conv1d_9[0][0]                   
__________________________________________________________________________________________________
reshape_19 (Reshape)            (None, 1, 1, 1024)   0           activation_39[0][0]              
__________________________________________________________________________________________________
multiply_9 (Multiply)           (None, 14, 14, 1024) 0           batch_normalization_33[0][0]     
                                                                 reshape_19[0][0]                 
__________________________________________________________________________________________________
add_9 (Add)                     (None, 14, 14, 1024) 0           multiply_9[0][0]                 
                                                                 activation_36[0][0]              
__________________________________________________________________________________________________
activation_40 (Activation)      (None, 14, 14, 1024) 0           add_9[0][0]                      
__________________________________________________________________________________________________
conv2d_34 (Conv2D)              (None, 14, 14, 256)  262400      activation_40[0][0]              
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, 14, 14, 256)  1024        conv2d_34[0][0]                  
__________________________________________________________________________________________________
activation_41 (Activation)      (None, 14, 14, 256)  0           batch_normalization_34[0][0]     
__________________________________________________________________________________________________
conv2d_35 (Conv2D)              (None, 14, 14, 256)  590080      activation_41[0][0]              
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, 14, 14, 256)  1024        conv2d_35[0][0]                  
__________________________________________________________________________________________________
activation_42 (Activation)      (None, 14, 14, 256)  0           batch_normalization_35[0][0]     
__________________________________________________________________________________________________
conv2d_36 (Conv2D)              (None, 14, 14, 1024) 263168      activation_42[0][0]              
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, 14, 14, 1024) 4096        conv2d_36[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_10 (Gl (None, 1024)         0           batch_normalization_36[0][0]     
__________________________________________________________________________________________________
reshape_20 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_10[0][0]
__________________________________________________________________________________________________
conv1d_10 (Conv1D)              (None, 1024, 1)      5           reshape_20[0][0]                 
__________________________________________________________________________________________________
activation_43 (Activation)      (None, 1024, 1)      0           conv1d_10[0][0]                  
__________________________________________________________________________________________________
reshape_21 (Reshape)            (None, 1, 1, 1024)   0           activation_43[0][0]              
__________________________________________________________________________________________________
multiply_10 (Multiply)          (None, 14, 14, 1024) 0           batch_normalization_36[0][0]     
                                                                 reshape_21[0][0]                 
__________________________________________________________________________________________________
add_10 (Add)                    (None, 14, 14, 1024) 0           multiply_10[0][0]                
                                                                 activation_40[0][0]              
__________________________________________________________________________________________________
activation_44 (Activation)      (None, 14, 14, 1024) 0           add_10[0][0]                     
__________________________________________________________________________________________________
conv2d_37 (Conv2D)              (None, 14, 14, 256)  262400      activation_44[0][0]              
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 14, 14, 256)  1024        conv2d_37[0][0]                  
__________________________________________________________________________________________________
activation_45 (Activation)      (None, 14, 14, 256)  0           batch_normalization_37[0][0]     
__________________________________________________________________________________________________
conv2d_38 (Conv2D)              (None, 14, 14, 256)  590080      activation_45[0][0]              
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 14, 14, 256)  1024        conv2d_38[0][0]                  
__________________________________________________________________________________________________
activation_46 (Activation)      (None, 14, 14, 256)  0           batch_normalization_38[0][0]     
__________________________________________________________________________________________________
conv2d_39 (Conv2D)              (None, 14, 14, 1024) 263168      activation_46[0][0]              
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 14, 14, 1024) 4096        conv2d_39[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_11 (Gl (None, 1024)         0           batch_normalization_39[0][0]     
__________________________________________________________________________________________________
reshape_22 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_11[0][0]
__________________________________________________________________________________________________
conv1d_11 (Conv1D)              (None, 1024, 1)      5           reshape_22[0][0]                 
__________________________________________________________________________________________________
activation_47 (Activation)      (None, 1024, 1)      0           conv1d_11[0][0]                  
__________________________________________________________________________________________________
reshape_23 (Reshape)            (None, 1, 1, 1024)   0           activation_47[0][0]              
__________________________________________________________________________________________________
multiply_11 (Multiply)          (None, 14, 14, 1024) 0           batch_normalization_39[0][0]     
                                                                 reshape_23[0][0]                 
__________________________________________________________________________________________________
add_11 (Add)                    (None, 14, 14, 1024) 0           multiply_11[0][0]                
                                                                 activation_44[0][0]              
__________________________________________________________________________________________________
activation_48 (Activation)      (None, 14, 14, 1024) 0           add_11[0][0]                     
__________________________________________________________________________________________________
conv2d_40 (Conv2D)              (None, 14, 14, 256)  262400      activation_48[0][0]              
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 14, 14, 256)  1024        conv2d_40[0][0]                  
__________________________________________________________________________________________________
activation_49 (Activation)      (None, 14, 14, 256)  0           batch_normalization_40[0][0]     
__________________________________________________________________________________________________
conv2d_41 (Conv2D)              (None, 14, 14, 256)  590080      activation_49[0][0]              
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 14, 14, 256)  1024        conv2d_41[0][0]                  
__________________________________________________________________________________________________
activation_50 (Activation)      (None, 14, 14, 256)  0           batch_normalization_41[0][0]     
__________________________________________________________________________________________________
conv2d_42 (Conv2D)              (None, 14, 14, 1024) 263168      activation_50[0][0]              
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 14, 14, 1024) 4096        conv2d_42[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_12 (Gl (None, 1024)         0           batch_normalization_42[0][0]     
__________________________________________________________________________________________________
reshape_24 (Reshape)            (None, 1024, 1)      0           global_average_pooling2d_12[0][0]
__________________________________________________________________________________________________
conv1d_12 (Conv1D)              (None, 1024, 1)      5           reshape_24[0][0]                 
__________________________________________________________________________________________________
activation_51 (Activation)      (None, 1024, 1)      0           conv1d_12[0][0]                  
__________________________________________________________________________________________________
reshape_25 (Reshape)            (None, 1, 1, 1024)   0           activation_51[0][0]              
__________________________________________________________________________________________________
multiply_12 (Multiply)          (None, 14, 14, 1024) 0           batch_normalization_42[0][0]     
                                                                 reshape_25[0][0]                 
__________________________________________________________________________________________________
add_12 (Add)                    (None, 14, 14, 1024) 0           multiply_12[0][0]                
                                                                 activation_48[0][0]              
__________________________________________________________________________________________________
activation_52 (Activation)      (None, 14, 14, 1024) 0           add_12[0][0]                     
__________________________________________________________________________________________________
conv2d_44 (Conv2D)              (None, 7, 7, 512)    524800      activation_52[0][0]              
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 7, 7, 512)    2048        conv2d_44[0][0]                  
__________________________________________________________________________________________________
activation_53 (Activation)      (None, 7, 7, 512)    0           batch_normalization_44[0][0]     
__________________________________________________________________________________________________
conv2d_45 (Conv2D)              (None, 7, 7, 512)    2359808     activation_53[0][0]              
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 7, 7, 512)    2048        conv2d_45[0][0]                  
__________________________________________________________________________________________________
activation_54 (Activation)      (None, 7, 7, 512)    0           batch_normalization_45[0][0]     
__________________________________________________________________________________________________
conv2d_46 (Conv2D)              (None, 7, 7, 2048)   1050624     activation_54[0][0]              
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, 7, 7, 2048)   8192        conv2d_46[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_13 (Gl (None, 2048)         0           batch_normalization_46[0][0]     
__________________________________________________________________________________________________
reshape_26 (Reshape)            (None, 2048, 1)      0           global_average_pooling2d_13[0][0]
__________________________________________________________________________________________________
conv1d_13 (Conv1D)              (None, 2048, 1)      7           reshape_26[0][0]                 
__________________________________________________________________________________________________
activation_55 (Activation)      (None, 2048, 1)      0           conv1d_13[0][0]                  
__________________________________________________________________________________________________
reshape_27 (Reshape)            (None, 1, 1, 2048)   0           activation_55[0][0]              
__________________________________________________________________________________________________
conv2d_43 (Conv2D)              (None, 7, 7, 2048)   2099200     activation_52[0][0]              
__________________________________________________________________________________________________
multiply_13 (Multiply)          (None, 7, 7, 2048)   0           batch_normalization_46[0][0]     
                                                                 reshape_27[0][0]                 
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 7, 7, 2048)   8192        conv2d_43[0][0]                  
__________________________________________________________________________________________________
add_13 (Add)                    (None, 7, 7, 2048)   0           multiply_13[0][0]                
                                                                 batch_normalization_43[0][0]     
__________________________________________________________________________________________________
activation_56 (Activation)      (None, 7, 7, 2048)   0           add_13[0][0]                     
__________________________________________________________________________________________________
conv2d_47 (Conv2D)              (None, 7, 7, 512)    1049088     activation_56[0][0]              
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, 7, 7, 512)    2048        conv2d_47[0][0]                  
__________________________________________________________________________________________________
activation_57 (Activation)      (None, 7, 7, 512)    0           batch_normalization_47[0][0]     
__________________________________________________________________________________________________
conv2d_48 (Conv2D)              (None, 7, 7, 512)    2359808     activation_57[0][0]              
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, 7, 7, 512)    2048        conv2d_48[0][0]                  
__________________________________________________________________________________________________
activation_58 (Activation)      (None, 7, 7, 512)    0           batch_normalization_48[0][0]     
__________________________________________________________________________________________________
conv2d_49 (Conv2D)              (None, 7, 7, 2048)   1050624     activation_58[0][0]              
__________________________________________________________________________________________________
batch_normalization_49 (BatchNo (None, 7, 7, 2048)   8192        conv2d_49[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_14 (Gl (None, 2048)         0           batch_normalization_49[0][0]     
__________________________________________________________________________________________________
reshape_28 (Reshape)            (None, 2048, 1)      0           global_average_pooling2d_14[0][0]
__________________________________________________________________________________________________
conv1d_14 (Conv1D)              (None, 2048, 1)      7           reshape_28[0][0]                 
__________________________________________________________________________________________________
activation_59 (Activation)      (None, 2048, 1)      0           conv1d_14[0][0]                  
__________________________________________________________________________________________________
reshape_29 (Reshape)            (None, 1, 1, 2048)   0           activation_59[0][0]              
__________________________________________________________________________________________________
multiply_14 (Multiply)          (None, 7, 7, 2048)   0           batch_normalization_49[0][0]     
                                                                 reshape_29[0][0]                 
__________________________________________________________________________________________________
add_14 (Add)                    (None, 7, 7, 2048)   0           multiply_14[0][0]                
                                                                 activation_56[0][0]              
__________________________________________________________________________________________________
activation_60 (Activation)      (None, 7, 7, 2048)   0           add_14[0][0]                     
__________________________________________________________________________________________________
conv2d_50 (Conv2D)              (None, 7, 7, 512)    1049088     activation_60[0][0]              
__________________________________________________________________________________________________
batch_normalization_50 (BatchNo (None, 7, 7, 512)    2048        conv2d_50[0][0]                  
__________________________________________________________________________________________________
activation_61 (Activation)      (None, 7, 7, 512)    0           batch_normalization_50[0][0]     
__________________________________________________________________________________________________
conv2d_51 (Conv2D)              (None, 7, 7, 512)    2359808     activation_61[0][0]              
__________________________________________________________________________________________________
batch_normalization_51 (BatchNo (None, 7, 7, 512)    2048        conv2d_51[0][0]                  
__________________________________________________________________________________________________
activation_62 (Activation)      (None, 7, 7, 512)    0           batch_normalization_51[0][0]     
__________________________________________________________________________________________________
conv2d_52 (Conv2D)              (None, 7, 7, 2048)   1050624     activation_62[0][0]              
__________________________________________________________________________________________________
batch_normalization_52 (BatchNo (None, 7, 7, 2048)   8192        conv2d_52[0][0]                  
__________________________________________________________________________________________________
global_average_pooling2d_15 (Gl (None, 2048)         0           batch_normalization_52[0][0]     
__________________________________________________________________________________________________
reshape_30 (Reshape)            (None, 2048, 1)      0           global_average_pooling2d_15[0][0]
__________________________________________________________________________________________________
conv1d_15 (Conv1D)              (None, 2048, 1)      7           reshape_30[0][0]                 
__________________________________________________________________________________________________
activation_63 (Activation)      (None, 2048, 1)      0           conv1d_15[0][0]                  
__________________________________________________________________________________________________
reshape_31 (Reshape)            (None, 1, 1, 2048)   0           activation_63[0][0]              
__________________________________________________________________________________________________
multiply_15 (Multiply)          (None, 7, 7, 2048)   0           batch_normalization_52[0][0]     
                                                                 reshape_31[0][0]                 
__________________________________________________________________________________________________
add_15 (Add)                    (None, 7, 7, 2048)   0           multiply_15[0][0]                
                                                                 activation_60[0][0]              
__________________________________________________________________________________________________
activation_64 (Activation)      (None, 7, 7, 2048)   0           add_15[0][0]                     
__________________________________________________________________________________________________
global_average_pooling2d_16 (Gl (None, 2048)         0           activation_64[0][0]              
__________________________________________________________________________________________________
dense (Dense)                   (None, 17)           34833       global_average_pooling2d_16[0][0]
==================================================================================================
Total params: 23,622,631
Trainable params: 23,569,511
Non-trainable params: 53,120

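  Before moving on, the summary above already lets us cross-check ECA's adaptive kernel size: the ECA Conv1D layers attached to 1024-channel stages hold 5 parameters and those on 2048-channel stages hold 7, i.e. k = 5 and k = 7 with no bias term. Below is a minimal inspection sketch; it only assumes that model is the ECA-ResNet-50 built earlier.

# Print the kernel size of every Conv1D layer (these are the ECA modules);
# it should grow with the channel dimension of the stage the module sits on.
from keras.layers import Conv1D

for layer in model.layers:
    if isinstance(layer, Conv1D):
        # kernel_size is a 1-tuple for Conv1D, e.g. (5,)
        print(layer.name, 'kernel_size =', layer.kernel_size[0], 'params =', layer.count_params())
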
4.9 Callback setup

# Imports used in this section (TensorFlow and the Keras layers were imported earlier)
import os
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

# Learning rate schedule: step the learning rate down as training progresses
def adjust_learning_rate(epoch):
    # epochs 0-40 (the epoch index passed in is 0-based)
    if epoch <= 40:
        lr = 1e-4
    # epochs 41-80
    elif epoch > 40 and epoch <= 80:
        lr = 1e-5
    # epochs 81 and beyond
    else:
        lr = 1e-6
    return lr

# Define the optimizer
adam = Adam(learning_rate=1e-4)

# Restore the weights if a checkpoint already exists
checkpoint_save_path = "./checkpoint/ECA-ResNet-50.ckpt"
if os.path.exists(checkpoint_save_path + '.index'):
    print('-------------load the model-----------------')
    model.load_weights(checkpoint_save_path)
# Save only the best weights during training
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)

# Collect the callbacks: learning rate schedule + checkpointing
callbacks = []
callbacks.append(LearningRateScheduler(adjust_learning_rate))
callbacks.append(cp_callback)
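
  A quick sanity check on the schedule (just a sketch, nothing model-specific): LearningRateScheduler passes the 0-based epoch index to adjust_learning_rate, so epochs 0-40 train at 1e-4, epochs 41-80 at 1e-5, and the remaining epochs at 1e-6.

# Print the learning rate at the boundary epochs of the piecewise schedule
for epoch in (0, 40, 41, 80, 81, 99):
    print('epoch %3d -> lr %.0e' % (epoch, adjust_learning_rate(epoch)))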

4.10 Model training

# Compile the model: optimizer, loss function, and report accuracy during training
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

# From TensorFlow 2.1 onward, fit() can consume the generators directly
history = model.fit(x=train_generator, epochs=epochs, validation_data=test_generator, callbacks=callbacks)

image-20220821160539111
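
  Once training finishes, the best weights saved by the checkpoint callback can be evaluated separately. This is a minimal sketch; it assumes test_generator is the same generator used for validation above.

# Restore the best checkpoint and evaluate on the held-out generator
model.load_weights(checkpoint_save_path)
loss, acc = model.evaluate(test_generator)
print('test loss: %.4f, test accuracy: %.4f' % (loss, acc))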

4.11 Visualization

  Accuracy curves

# Training-set accuracy curve
plt.plot(np.arange(epochs), history.history['accuracy'], c='b', label='train_accuracy')
# Validation-set accuracy curve
plt.plot(np.arange(epochs), history.history['val_accuracy'], c='y', label='val_accuracy')
# Legend
plt.legend()
# x-axis label
plt.xlabel('epochs')
# y-axis label
plt.ylabel('accuracy')
# Show the figure
plt.show()

image-20220821160609417

  Loss curves

# Training-set loss curve
plt.plot(np.arange(epochs), history.history['loss'], c='b', label='train_loss')
# Validation-set loss curve
plt.plot(np.arange(epochs), history.history['val_loss'], c='y', label='val_loss')
# Legend
plt.legend()
# x-axis label
plt.xlabel('epochs')
# y-axis label
plt.ylabel('loss')
# Show the figure
plt.show()

image-20220821160629678

4.12 Comparing ResNet50, SE-ResNet50 and ECA-ResNet50

  Training results for ResNet50:

image-20220821160720966

image-20220821160734629

  Training results for SE-ResNet50:

image-20220821161231581

image-20220821161241736

  Training results for ECA-ResNet50:

image-20220821160757510

image-20220821160807582

  In terms of accuracy, ECA-ResNet50 comes out 6.25% higher than the plain ResNet50 without attention, and 1.1% higher than SE-ResNet50. These numbers hold only for my dataset, which is fairly small, so they should not be taken as a general benchmark; the original ECA-Net paper runs its comparisons on ImageNet and COCO.

# References

Wang Q, Wu B, Zhu P, et al. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.

SENet架构-通道注意力机制

ResNet架构解析

神经网络学习小记录65——Tensorflow2 图像处理中注意力机制的解析与代码详解

CNN中的通道注意力机制(SEnet、ECAnet),附Tensorflow完整代码
