TensorFlow 2.0 tf.keras Cat-vs-Dog Recognition (1) (Part 2)

Introduction: This post works through a binary classification problem on a cats-vs-dogs dataset. To keep training time down, the training set has 2123 images, the validation set 909, and the test set 1000, split between the two classes (cat and dog). The images are placed under the dc_2000 folder.

4.2 Build and Train the Model

model = tf.keras.Sequential()   # sequential (layer-by-layer) model
model.add(tf.keras.layers.Conv2D(64, (3, 3), input_shape=(200, 200, 3), activation='relu'))
model.add(tf.keras.layers.Conv2D(64, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(tf.keras.layers.Conv2D(128, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(256, (3, 3), activation='relu'))
model.add(tf.keras.layers.Conv2D(256, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(512, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(512, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(1024, (3, 3), activation='relu'))
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(tf.keras.layers.Dense(1024, activation='relu'))
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
#%%
model.summary()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss='binary_crossentropy',
              metrics=['acc'])
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 198, 198, 64)      1792      
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 196, 196, 64)      36928     
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 98, 98, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 96, 96, 128)       73856     
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 94, 94, 128)       147584    
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 47, 47, 128)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 45, 45, 256)       295168    
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 43, 43, 256)       590080    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 21, 21, 256)       0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 19, 19, 512)       1180160   
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 9, 9, 512)         0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 7, 7, 512)         2359808   
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 3, 3, 512)         0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 1, 1, 1024)        4719616   
_________________________________________________________________
global_average_pooling2d (Gl (None, 1024)              0         
_________________________________________________________________
dense (Dense)                (None, 1024)              1049600   
_________________________________________________________________
dense_1 (Dense)              (None, 256)               262400    
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 257       
=================================================================
Total params: 10,717,249
Trainable params: 10,717,249
Non-trainable params: 0
_________________________________________________________________
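The parameter counts in the summary above can be verified by hand: a Conv2D layer has (kernel_h × kernel_w × in_channels + 1) × filters weights (the +1 is the per-filter bias), and a Dense layer has (inputs + 1) × outputs. A quick sanity-check sketch:

```python
# Verify a few Param # values from model.summary() by hand.
def conv2d_params(kh, kw, in_ch, filters):
    # weights per filter = kh*kw*in_ch, plus one bias per filter
    return (kh * kw * in_ch + 1) * filters

def dense_params(n_in, n_out):
    # one weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

print(conv2d_params(3, 3, 3, 64))    # conv2d:   1792
print(conv2d_params(3, 3, 64, 64))   # conv2d_1: 36928
print(dense_params(1024, 1024))      # dense:    1049600
```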
steps_per_epoch = train_count // batch_size
validation_steps = test_count // batch_size
history = model.fit(train_ds,
                    epochs=100,
                    steps_per_epoch=steps_per_epoch,
                    validation_data=test_ds,
                    validation_steps=validation_steps)
Epoch 1/100
66/66 [==============================] - 11s 171ms/step - loss: 0.6936 - acc: 0.5208 - val_loss: 0.6931 - val_acc: 0.4833
Epoch 2/100
66/66 [==============================] - 11s 169ms/step - loss: 0.6902 - acc: 0.5312 - val_loss: 0.6919 - val_acc: 0.5502
Epoch 3/100
66/66 [==============================] - 11s 168ms/step - loss: 0.6851 - acc: 0.5611 - val_loss: 0.6799 - val_acc: 0.6105
Epoch 4/100
66/66 [==============================] - 17s 254ms/step - loss: 0.6650 - acc: 0.6080 - val_loss: 0.6568 - val_acc: 0.6228
Epoch 5/100
66/66 [==============================] - 17s 257ms/step - loss: 0.6341 - acc: 0.6501 - val_loss: 0.6466 - val_acc: 0.6395
.......
Epoch 98/100
66/66 [==============================] - 17s 256ms/step - loss: 4.7811e-08 - acc: 1.0000 - val_loss: 3.8404 - val_acc: 0.7422
Epoch 99/100
66/66 [==============================] - 17s 259ms/step - loss: 5.1801e-08 - acc: 1.0000 - val_loss: 3.8481 - val_acc: 0.7433
Epoch 100/100
66/66 [==============================] - 17s 256ms/step - loss: 4.4815e-08 - acc: 1.0000 - val_loss: 3.8532 - val_acc: 0.7444
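The "66/66" in each epoch line is exactly steps_per_epoch = train_count // batch_size. Assuming batch_size = 32 as set in the data-pipeline section of the earlier part (an assumption here), the arithmetic works out:

```python
# Why the progress bar shows 66 steps per epoch (assuming batch_size = 32
# from the earlier data-pipeline section).
train_count, test_count, batch_size = 2123, 909, 32
print(train_count // batch_size)  # 66 training steps per epoch
print(test_count // batch_size)   # 28 validation steps
```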

4.3 Analysis and Evaluation

Comparing the accuracy and loss curves on the training and validation sets, we can see that the network overfits: training accuracy reaches 100% while validation accuracy plateaus around 74%.

history.history.keys()   # dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])
plt.plot(history.epoch, history.history.get('acc'), label='acc')
plt.plot(history.epoch, history.history.get('val_acc'), label='val_acc')
plt.legend()

plt.figure()   # new figure so the loss curves don't overlay the accuracy plot
plt.plot(history.epoch, history.history.get('loss'), label='loss')
plt.plot(history.epoch, history.history.get('val_loss'), label='val_loss')
plt.legend()

Finally, evaluate the loss and accuracy on the test set:

loss, acc = model.evaluate(test_data.batch(batch_size))
32/32 [==============================] - 4s 111ms/step - loss: 3.8820 - acc: 0.7300

To sum up: the results show overfitting. Because the dataset is small, the network cannot extract higher-level features, so test accuracy is only 73%. Later we can enlarge the dataset, apply data augmentation, and introduce regularization and dropout to curb the overfitting and improve accuracy.
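As a sketch of the fixes mentioned above (not the tuned solution, just one plausible shape for it): augmentation can be applied in the tf.data pipeline with tf.image ops, and Dropout can be inserted between the Dense layers. The `augment` function and `head` model below are illustrative names, not from the original code.

```python
import tensorflow as tf

# Sketch only: random augmentation applied per-example in the tf.data pipeline.
def augment(image, label):
    image = tf.image.random_flip_left_right(image)         # horizontal flip
    image = tf.image.random_brightness(image, max_delta=0.1)
    return image, label

# In the real pipeline this would go before batching:
# train_ds = train_ds.map(augment)

# Sketch only: the dense head with Dropout added to fight overfitting.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),   # randomly zero 50% of units at train time
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
```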
