CNN for CV: Cat vs. Dog Image Classification with a Convolutional Neural Network (Modified AlexNet: Training/Evaluation/Inference) Based on the TensorFlow Framework

Summary: Cat vs. dog image classification with a CNN (a modified AlexNet, covering training, evaluation, and inference) implemented with the TensorFlow framework.


Contents

Cat vs. Dog Image Classification with a CNN (Modified AlexNet: Training/Evaluation/Inference) Based on the TensorFlow Framework

Dataset Introduction

Output Results

Prediction with the model.ckpt-6000 checkpoint

Misclassified cases

Training Results

Core Code


Cat vs. Dog Image Classification with a CNN (Modified AlexNet: Training/Evaluation/Inference) Based on the TensorFlow Framework

Dataset Introduction

Data download: Dogs vs. Cats Redux: Kernels Edition | Kaggle

    The train folder contains 25,000 images of dogs and cats; each image in this folder carries its label as part of the file name. The test folder contains 12,500 images, named by numeric id. For each image in the test set, you should predict the probability that the image is a dog (1 = dog, 0 = cat).
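
Because the labels are encoded in the file names, they can be recovered with a few lines of Python. The following is a minimal sketch assuming the Kaggle train images are unpacked into a local train/ directory; the get_files helper name and the 0 = cat / 1 = dog integer encoding are illustrative choices, not part of the original code.

import os

def get_files(train_dir):
    """Minimal sketch: collect image paths and derive labels from file names."""
    image_paths, labels = [], []
    for name in os.listdir(train_dir):
        if not name.lower().endswith('.jpg'):
            continue
        image_paths.append(os.path.join(train_dir, name))
        # Kaggle file names look like 'cat.0.jpg' / 'dog.0.jpg'
        labels.append(1 if name.startswith('dog') else 0)   # 0 = cat, 1 = dog
    return image_paths, labels

# Example usage (the path is an assumption):
# paths, labels = get_files('./data/train')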

Output Results

Prediction with the model.ckpt-6000 checkpoint

The misclassified cases can be read off the table below: each cell gives the predicted class and its probability (for example, "dog 0.740" means the model assigned probability 0.740 to the dog class), while the true class is given by the file name. A checkpoint-restore sketch follows the table.

No. | Image | model.ckpt-4000 | model.ckpt-6000 | model.ckpt-8000 | model.ckpt-10000 | model.ckpt-12000
1 | cat (1).jpg | cat 0.631 | dog 0.740 | dog 0.781 | dog 0.976 | dog 0.991
2 | cat (10).jpg | dog 0.697 | cat 0.566 | cat 0.925 | cat 0.925 | cat 0.816
3 | cat (11).jpg | cat 0.927 | cat 0.988 | cat 1.000 | cat 1.000 | cat 1.000
4 | cat (12).jpg | dog 0.746 | dog 0.723 | dog 0.822 | dog 0.998 | dog 1.000
5 | cat (13).jpg | cat 0.933 | cat 0.983 | cat 0.997 | cat 1.000 | cat 1.000
6 | cat (14).jpg | dog 0.657 | cat 0.597 | dog 0.758 | dog 0.695 | cat 0.544
7 | cat (15).jpg | dog 0.578 | dog 0.535 | dog 0.526 | dog 0.750 | dog 0.569
8 | cat (2).jpg | cat 0.649 | cat 0.637 | cat 0.844 | cat 0.996 | cat 0.998
9 | cat (3).jpg | dog 0.668 | cat 0.538 | cat 0.710 | cat 0.968 | cat 0.995
10 | cat (4).jpg | dog 0.856 | dog 0.780 | dog 0.831 | dog 0.974 | dog 0.976
11 | cat (5).jpg | cat 0.812 | cat 0.776 | cat 0.505 | cat 0.732 | dog 0.608
12 | cat (6).jpg | cat 0.524 | dog 0.661 | dog 0.748 | dog 0.970 | dog 0.987
13 | cat (7).jpg | dog 0.612 | cat 0.845 | cat 0.894 | cat 0.987 | cat 0.728
14 | cat (8).jpg | dog 0.823 | dog 0.948 | dog 0.920 | dog 0.982 | dog 0.999
15 | cat (9).jpg | cat 0.697 | cat 0.704 | dog 0.819 | cat 0.930 | dog 0.718
16 | dog (1).jpg | dog 0.987 | dog 0.995 | dog 0.999 | dog 1.000 | dog 1.000
17 | dog (10).jpg | dog 0.628 | cat 0.629 | cat 0.994 | cat 1.000 | cat 1.000
18 | dog (11).jpg | dog 0.804 | dog 0.879 | dog 0.993 | dog 1.000 | dog 1.000
19 | dog (12).jpg | cat 0.704 | cat 0.758 | dog 0.503 | dog 0.653 | cat 0.985
20 | dog (13).jpg | dog 0.987 | dog 0.997 | dog 1.000 | dog 1.000 | dog 1.000
21 | dog (14).jpg | dog 0.815 | dog 0.844 | dog 0.904 | dog 0.996 | dog 0.950
22 | dog (15).jpg | dog 0.917 | dog 0.984 | dog 0.999 | dog 1.000 | dog 1.000
23 | dog (16).jpg | dog 0.883 | dog 0.931 | dog 0.830 | dog 0.975 | dog 0.983
24 | dog (2).jpg | dog 0.934 | dog 0.982 | dog 0.998 | dog 1.000 | dog 1.000
25 | dog (3).jpg | dog 0.993 | dog 1.000 | dog 1.000 | dog 1.000 | dog 1.000
26 | dog (4).jpg | dog 0.693 | dog 0.754 | dog 0.976 | dog 0.515 | dog 0.995
27 | dog (5).jpg | dog 0.916 | dog 0.976 | dog 0.993 | dog 0.998 | dog 1.000
28 | dog (6).jpg | dog 0.947 | dog 0.989 | dog 0.999 | dog 1.000 | dog 1.000
29 | dog (7).jpg | cat 0.526 | cat 0.685 | cat 0.961 | cat 1.000 | cat 1.000
30 | dog (8).jpg | dog 0.981 | dog 0.998 | dog 1.000 | dog 1.000 | dog 1.000
31 | dog (9).jpg | dog 0.899 | dog 0.983 | dog 0.999 | dog 1.000 | dog 1.000
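
Predictions like those above can be reproduced by rebuilding the graph and restoring one of the saved checkpoints. The sketch below is an assumption-laden outline rather than the original evaluation script: it reuses the inference() function from the Core Code section, resizes inputs to 208x208, and the module name and checkpoint path are illustrative.

import numpy as np
import tensorflow as tf
from PIL import Image
from model import inference          # hypothetical module holding the inference() shown in Core Code

def evaluate_one_image(image_path, checkpoint_path='logs/model.ckpt-6000'):
    """Sketch: restore a checkpoint and print the predicted class probability."""
    image = Image.open(image_path).resize((208, 208))   # 208x208 input size is an assumption
    image_array = np.asarray(image, dtype=np.float32)

    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32, shape=[208, 208, 3])
        image_batch = tf.reshape(x, [1, 208, 208, 3])
        logits = inference(image_batch, batch_size=1, n_classes=2)
        probs = tf.nn.softmax(logits)

        saver = tf.train.Saver()
        with tf.Session() as sess:
            saver.restore(sess, checkpoint_path)
            prediction = sess.run(probs, feed_dict={x: image_array})
            if np.argmax(prediction) == 0:
                print('P(cat) = %.3f' % prediction[0, 0])
            else:
                print('P(dog) = %.3f' % prediction[0, 1])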

Training Results

1. Step 0, train loss = 0.69, train accuracy = 78.12%
2. Step 50, train loss = 0.69, train accuracy = 43.75%
3. Step 100, train loss = 0.70, train accuracy = 46.88%
4. Step 150, train loss = 0.65, train accuracy = 75.00%
5. Step 200, train loss = 0.66, train accuracy = 59.38%
6. Step 250, train loss = 0.66, train accuracy = 62.50%
7. Step 300, train loss = 0.72, train accuracy = 40.62%
8. Step 350, train loss = 0.66, train accuracy = 62.50%
9. Step 400, train loss = 0.58, train accuracy = 68.75%
10. Step 450, train loss = 0.70, train accuracy = 65.62%
11. Step 500, train loss = 0.68, train accuracy = 56.25%
12. Step 550, train loss = 0.51, train accuracy = 81.25%
13. Step 600, train loss = 0.54, train accuracy = 75.00%
14. Step 650, train loss = 0.64, train accuracy = 68.75%
15. Step 700, train loss = 0.69, train accuracy = 53.12%
16. Step 750, train loss = 0.57, train accuracy = 71.88%
17. Step 800, train loss = 0.80, train accuracy = 50.00%
18. Step 850, train loss = 0.62, train accuracy = 59.38%
19. Step 900, train loss = 0.59, train accuracy = 65.62%
20. Step 950, train loss = 0.54, train accuracy = 71.88%
21. Step 1000, train loss = 0.57, train accuracy = 68.75%
22. Step 1050, train loss = 0.56, train accuracy = 78.12%
23. Step 1100, train loss = 0.66, train accuracy = 59.38%
24. Step 1150, train loss = 0.50, train accuracy = 84.38%
25. Step 1200, train loss = 0.46, train accuracy = 81.25%
26. Step 1250, train loss = 0.57, train accuracy = 59.38%
27. Step 1300, train loss = 0.37, train accuracy = 81.25%
28. Step 1350, train loss = 0.64, train accuracy = 62.50%
29. Step 1400, train loss = 0.44, train accuracy = 81.25%
30. Step 1450, train loss = 0.46, train accuracy = 84.38%
31. Step 1500, train loss = 0.50, train accuracy = 71.88%
32. Step 1550, train loss = 0.58, train accuracy = 62.50%
33. Step 1600, train loss = 0.43, train accuracy = 75.00%
34. Step 1650, train loss = 0.55, train accuracy = 71.88%
35. Step 1700, train loss = 0.50, train accuracy = 71.88%
36. Step 1750, train loss = 0.46, train accuracy = 75.00%
37. Step 1800, train loss = 0.81, train accuracy = 53.12%
38. Step 1850, train loss = 0.41, train accuracy = 90.62%
39. Step 1900, train loss = 0.65, train accuracy = 68.75%
40. Step 1950, train loss = 0.37, train accuracy = 84.38%
41. Step 2000, train loss = 0.39, train accuracy = 81.25%
42. Step 2050, train loss = 0.45, train accuracy = 84.38%
43. Step 2100, train loss = 0.44, train accuracy = 78.12%
44. Step 2150, train loss = 0.59, train accuracy = 65.62%
45. Step 2200, train loss = 0.51, train accuracy = 78.12%
46. Step 2250, train loss = 0.42, train accuracy = 81.25%
47. Step 2300, train loss = 0.32, train accuracy = 87.50%
48. Step 2350, train loss = 0.48, train accuracy = 75.00%
49. Step 2400, train loss = 0.54, train accuracy = 71.88%
50. Step 2450, train loss = 0.51, train accuracy = 71.88%
51. Step 2500, train loss = 0.73, train accuracy = 59.38%
52. Step 2550, train loss = 0.52, train accuracy = 78.12%
53. Step 2600, train loss = 0.65, train accuracy = 62.50%
54. Step 2650, train loss = 0.52, train accuracy = 71.88%
55. Step 2700, train loss = 0.48, train accuracy = 71.88%
56. Step 2750, train loss = 0.37, train accuracy = 84.38%
57. Step 2800, train loss = 0.46, train accuracy = 78.12%
58. Step 2850, train loss = 0.40, train accuracy = 84.38%
59. Step 2900, train loss = 0.45, train accuracy = 81.25%
60. Step 2950, train loss = 0.36, train accuracy = 84.38%
61. Step 3000, train loss = 0.46, train accuracy = 75.00%
62. Step 3050, train loss = 0.53, train accuracy = 71.88%
63. Step 3100, train loss = 0.37, train accuracy = 84.38%
64. Step 3150, train loss = 0.53, train accuracy = 75.00%
65. Step 3200, train loss = 0.52, train accuracy = 75.00%
66. Step 3250, train loss = 0.62, train accuracy = 65.62%
67. Step 3300, train loss = 0.58, train accuracy = 71.88%
68. Step 3350, train loss = 0.71, train accuracy = 65.62%
69. Step 3400, train loss = 0.43, train accuracy = 78.12%
70. Step 3450, train loss = 0.46, train accuracy = 78.12%
71. Step 3500, train loss = 0.46, train accuracy = 71.88%
72. Step 3550, train loss = 0.53, train accuracy = 68.75%
73. Step 3600, train loss = 0.44, train accuracy = 75.00%
74. Step 3650, train loss = 0.55, train accuracy = 65.62%
75. Step 3700, train loss = 0.62, train accuracy = 75.00%
76. Step 3750, train loss = 0.48, train accuracy = 75.00%
77. Step 3800, train loss = 0.66, train accuracy = 53.12%
78. Step 3850, train loss = 0.53, train accuracy = 75.00%
79. Step 3900, train loss = 0.36, train accuracy = 81.25%
80. Step 3950, train loss = 0.37, train accuracy = 87.50%
81. Step 4000, train loss = 0.46, train accuracy = 78.12%
82. Step 4050, train loss = 0.36, train accuracy = 84.38%
83. Step 4100, train loss = 0.34, train accuracy = 78.12%
84. Step 4150, train loss = 0.48, train accuracy = 78.12%
85. Step 4200, train loss = 0.43, train accuracy = 87.50%
86. Step 4250, train loss = 0.34, train accuracy = 84.38%
87. Step 4300, train loss = 0.28, train accuracy = 87.50%
88. Step 4350, train loss = 0.19, train accuracy = 96.88%
89. Step 4400, train loss = 0.46, train accuracy = 71.88%
90. Step 4450, train loss = 0.33, train accuracy = 84.38%
91. Step 4500, train loss = 0.55, train accuracy = 75.00%
92. Step 4550, train loss = 0.31, train accuracy = 93.75%
93. Step 4600, train loss = 0.30, train accuracy = 84.38%
94. Step 4650, train loss = 0.38, train accuracy = 84.38%
95. Step 4700, train loss = 0.36, train accuracy = 84.38%
96. Step 4750, train loss = 0.32, train accuracy = 87.50%
97. Step 4800, train loss = 0.36, train accuracy = 81.25%
98. Step 4850, train loss = 0.36, train accuracy = 87.50%
99. Step 4900, train loss = 0.49, train accuracy = 71.88%
100. Step 4950, train loss = 0.51, train accuracy = 68.75%
101. Step 5000, train loss = 0.59, train accuracy = 68.75%
102. Step 5050, train loss = 0.55, train accuracy = 75.00%
103. Step 5100, train loss = 0.71, train accuracy = 68.75%
104. Step 5150, train loss = 0.48, train accuracy = 71.88%
105. Step 5200, train loss = 0.39, train accuracy = 90.62%
106. Step 5250, train loss = 0.49, train accuracy = 81.25%
107. Step 5300, train loss = 0.36, train accuracy = 81.25%
108. Step 5350, train loss = 0.31, train accuracy = 90.62%
109. Step 5400, train loss = 0.39, train accuracy = 87.50%
110. Step 5450, train loss = 0.34, train accuracy = 78.12%
111. Step 5500, train loss = 0.29, train accuracy = 84.38%
112. Step 5550, train loss = 0.21, train accuracy = 93.75%
113. Step 5600, train loss = 0.41, train accuracy = 78.12%
114. Step 5650, train loss = 0.38, train accuracy = 84.38%
115. Step 5700, train loss = 0.27, train accuracy = 87.50%
116. Step 5750, train loss = 0.24, train accuracy = 90.62%
117. Step 5800, train loss = 0.17, train accuracy = 96.88%
118. Step 5850, train loss = 0.23, train accuracy = 93.75%
119. Step 5900, train loss = 0.37, train accuracy = 71.88%
120. Step 5950, train loss = 0.49, train accuracy = 71.88%
121. Step 6000, train loss = 0.43, train accuracy = 81.25%
122. Step 6050, train loss = 0.33, train accuracy = 87.50%
123. Step 6100, train loss = 0.46, train accuracy = 75.00%
124. Step 6150, train loss = 0.61, train accuracy = 81.25%
125. Step 6200, train loss = 0.34, train accuracy = 84.38%
126. Step 6250, train loss = 0.63, train accuracy = 71.88%
127. Step 6300, train loss = 0.21, train accuracy = 90.62%
128. Step 6350, train loss = 0.21, train accuracy = 90.62%
129. Step 6400, train loss = 0.27, train accuracy = 87.50%
130. Step 6450, train loss = 0.17, train accuracy = 87.50%
131. Step 6500, train loss = 0.34, train accuracy = 87.50%
132. Step 6550, train loss = 0.34, train accuracy = 87.50%
133. Step 6600, train loss = 0.32, train accuracy = 84.38%
134. Step 6650, train loss = 0.39, train accuracy = 84.38%
135. Step 6700, train loss = 0.38, train accuracy = 84.38%
136. Step 6750, train loss = 0.41, train accuracy = 84.38%
137. Step 6800, train loss = 0.49, train accuracy = 81.25%
138. Step 6850, train loss = 0.36, train accuracy = 84.38%
139. Step 6900, train loss = 0.20, train accuracy = 93.75%
140. Step 6950, train loss = 0.13, train accuracy = 93.75%
141. Step 7000, train loss = 0.31, train accuracy = 87.50%
142. Step 7050, train loss = 0.18, train accuracy = 93.75%
143. Step 7100, train loss = 0.23, train accuracy = 90.62%
144. Step 7150, train loss = 0.13, train accuracy = 96.88%
145. Step 7200, train loss = 0.14, train accuracy = 96.88%
146. Step 7250, train loss = 0.32, train accuracy = 84.38%
147. Step 7300, train loss = 0.18, train accuracy = 93.75%
148. Step 7350, train loss = 0.14, train accuracy = 100.00%
149. Step 7400, train loss = 0.60, train accuracy = 75.00%
150. Step 7450, train loss = 0.20, train accuracy = 93.75%
151. Step 7500, train loss = 0.13, train accuracy = 93.75%
152. Step 7550, train loss = 0.22, train accuracy = 90.62%
153. Step 7600, train loss = 0.13, train accuracy = 96.88%
154. Step 7650, train loss = 0.20, train accuracy = 93.75%
155. Step 7700, train loss = 0.24, train accuracy = 90.62%
156. Step 7750, train loss = 0.19, train accuracy = 93.75%
157. Step 7800, train loss = 0.16, train accuracy = 93.75%
158. Step 7850, train loss = 0.08, train accuracy = 100.00%
159. Step 7900, train loss = 0.10, train accuracy = 96.88%
160. Step 7950, train loss = 0.13, train accuracy = 93.75%
161. Step 8000, train loss = 0.18, train accuracy = 90.62%
162. Step 8050, train loss = 0.27, train accuracy = 93.75%
163. Step 8100, train loss = 0.04, train accuracy = 100.00%
164. Step 8150, train loss = 0.27, train accuracy = 87.50%
165. Step 8200, train loss = 0.06, train accuracy = 96.88%
166. Step 8250, train loss = 0.12, train accuracy = 100.00%
167. Step 8300, train loss = 0.28, train accuracy = 87.50%
168. Step 8350, train loss = 0.24, train accuracy = 90.62%
169. Step 8400, train loss = 0.16, train accuracy = 93.75%
170. Step 8450, train loss = 0.11, train accuracy = 93.75%
171. Step 8500, train loss = 0.13, train accuracy = 96.88%
172. Step 8550, train loss = 0.05, train accuracy = 100.00%
173. Step 8600, train loss = 0.10, train accuracy = 93.75%
174. Step 8650, train loss = 0.14, train accuracy = 100.00%
175. Step 8700, train loss = 0.21, train accuracy = 90.62%
176. Step 8750, train loss = 0.09, train accuracy = 96.88%
177. Step 8800, train loss = 0.11, train accuracy = 96.88%
178. Step 8850, train loss = 0.10, train accuracy = 96.88%
179. Step 8900, train loss = 0.12, train accuracy = 93.75%
180. Step 8950, train loss = 0.48, train accuracy = 81.25%
181. Step 9000, train loss = 0.07, train accuracy = 100.00%
182. Step 9050, train loss = 0.03, train accuracy = 100.00%
183. Step 9100, train loss = 0.10, train accuracy = 93.75%
184. Step 9150, train loss = 0.05, train accuracy = 96.88%
185. Step 9200, train loss = 0.04, train accuracy = 100.00%
186. Step 9250, train loss = 0.03, train accuracy = 100.00%
187. Step 9300, train loss = 0.04, train accuracy = 96.88%
188. Step 9350, train loss = 0.08, train accuracy = 100.00%
189. Step 9400, train loss = 0.05, train accuracy = 100.00%
190. Step 9450, train loss = 0.15, train accuracy = 90.62%
191. Step 9500, train loss = 0.03, train accuracy = 100.00%
192. Step 9550, train loss = 0.05, train accuracy = 100.00%
193. Step 9600, train loss = 0.15, train accuracy = 96.88%
194. Step 9650, train loss = 0.03, train accuracy = 100.00%
195. Step 9700, train loss = 0.02, train accuracy = 100.00%
196. Step 9750, train loss = 0.08, train accuracy = 96.88%
197. Step 9800, train loss = 0.04, train accuracy = 100.00%
198. Step 9850, train loss = 0.06, train accuracy = 96.88%
199. Step 9900, train loss = 0.03, train accuracy = 100.00%
200. Step 9950, train loss = 0.03, train accuracy = 100.00%
201. Step 10000, train loss = 0.11, train accuracy = 93.75%
202. Step 10050, train loss = 0.02, train accuracy = 100.00%
203. Step 10100, train loss = 0.01, train accuracy = 100.00%
204. Step 10150, train loss = 0.05, train accuracy = 96.88%
205. Step 10200, train loss = 0.07, train accuracy = 96.88%
206. Step 10250, train loss = 0.06, train accuracy = 96.88%
207. Step 10300, train loss = 0.03, train accuracy = 100.00%
208. Step 10350, train loss = 0.08, train accuracy = 96.88%
209. Step 10400, train loss = 0.05, train accuracy = 96.88%
210. Step 10450, train loss = 0.02, train accuracy = 100.00%
211. Step 10500, train loss = 0.22, train accuracy = 93.75%
212. Step 10550, train loss = 0.06, train accuracy = 100.00%
213. Step 10600, train loss = 0.02, train accuracy = 100.00%
214. Step 10650, train loss = 0.02, train accuracy = 100.00%
215. Step 10700, train loss = 0.03, train accuracy = 100.00%
216. Step 10750, train loss = 0.15, train accuracy = 96.88%
217. Step 10800, train loss = 0.05, train accuracy = 100.00%
218. Step 10850, train loss = 0.02, train accuracy = 100.00%
219. Step 10900, train loss = 0.04, train accuracy = 96.88%
220. Step 10950, train loss = 0.05, train accuracy = 96.88%
221. Step 11000, train loss = 0.02, train accuracy = 100.00%
222. Step 11050, train loss = 0.10, train accuracy = 96.88%
223. Step 11100, train loss = 0.08, train accuracy = 96.88%
224. Step 11150, train loss = 0.02, train accuracy = 100.00%
225. Step 11200, train loss = 0.01, train accuracy = 100.00%
226. Step 11250, train loss = 0.06, train accuracy = 96.88%
227. Step 11300, train loss = 0.18, train accuracy = 93.75%
228. Step 11350, train loss = 0.02, train accuracy = 100.00%
229. Step 11400, train loss = 0.04, train accuracy = 100.00%
230. Step 11450, train loss = 0.03, train accuracy = 100.00%
231. Step 11500, train loss = 0.01, train accuracy = 100.00%
232. Step 11550, train loss = 0.02, train accuracy = 100.00%
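
The log above (loss and accuracy printed every 50 steps, checkpoints such as model.ckpt-4000 through model.ckpt-12000) is consistent with a standard TF1 training loop of the following shape. This is a hedged sketch, not the original trainer: the batch size of 32 is inferred from the 1/32 = 3.12% accuracy granularity in the log, while the Adam optimizer, learning rate, paths, and the input-pipeline helpers are assumptions.

import os
import tensorflow as tf
from model import inference          # hypothetical module holding the inference() shown below

MAX_STEP = 12000
BATCH_SIZE = 32                      # inferred from the accuracy granularity in the log
LOGS_DIR = './logs'                  # illustrative checkpoint directory

def train(image_batch, label_batch):
    """Sketch of a training loop that would print a log like the one above."""
    logits = inference(image_batch, BATCH_SIZE, n_classes=2)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label_batch))
    accuracy = tf.reduce_mean(
        tf.cast(tf.nn.in_top_k(logits, label_batch, 1), tf.float32))
    train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)   # optimizer and learning rate are assumptions

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        for step in range(MAX_STEP + 1):
            _, loss_val, acc_val = sess.run([train_op, loss, accuracy])
            if step % 50 == 0:
                print('Step %d, train loss = %.2f, train accuracy = %.2f%%'
                      % (step, loss_val, acc_val * 100.0))
            if step > 0 and step % 2000 == 0:                 # yields model.ckpt-2000, ..., model.ckpt-12000
                saver.save(sess, os.path.join(LOGS_DIR, 'model.ckpt'), global_step=step)
        coord.request_stop()
        coord.join(threads)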

Core Code

# Core network definition. The fragment is reconstructed here as a complete
# inference() function: the opening of the 'conv1' scope and the function
# signature (images, batch_size, n_classes) are filled in from context.
import tensorflow as tf

def inference(images, batch_size, n_classes):
    # conv1: 3x3 convolution, 3 -> 16 channels
    with tf.variable_scope('conv1') as scope:
        weights = tf.get_variable('weights',
                                  shape=[3, 3, 3, 16],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[16],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        conv = tf.nn.conv2d(images, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(pre_activation, name=scope.name)

    # pool1 + lrn1: 3x3 max pooling with stride 2, then local response normalization
    with tf.variable_scope('pooling1_lrn') as scope:
        pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pooling1')
        norm1 = tf.nn.lrn(pool1, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')

    # conv2: 3x3 convolution, 16 -> 16 channels
    with tf.variable_scope('conv2') as scope:
        weights = tf.get_variable('weights',
                                  shape=[3, 3, 16, 16],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.1, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[16],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        conv = tf.nn.conv2d(norm1, weights, strides=[1, 1, 1, 1], padding='SAME')
        pre_activation = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(pre_activation, name='conv2')

    # lrn2 + pool2: normalization first, then 3x3 max pooling with stride 1
    with tf.variable_scope('pooling2_lrn') as scope:
        norm2 = tf.nn.lrn(conv2, depth_radius=4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2')
        pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 1, 1, 1], padding='SAME', name='pooling2')

    # local3: first fully connected layer, 128 units
    with tf.variable_scope('local3') as scope:
        reshape = tf.reshape(pool2, shape=[batch_size, -1])
        dim = reshape.get_shape()[1].value
        weights = tf.get_variable('weights',
                                  shape=[dim, 128],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[128],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        local3 = tf.nn.relu(tf.matmul(reshape, weights) + biases, name=scope.name)

    # local4: second fully connected layer, 128 units
    with tf.variable_scope('local4') as scope:
        weights = tf.get_variable('weights',
                                  shape=[128, 128],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[128],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        local4 = tf.nn.relu(tf.matmul(local3, weights) + biases, name='local4')

    # softmax_linear: linear layer producing the unnormalized class scores (logits)
    with tf.variable_scope('softmax_linear') as scope:
        weights = tf.get_variable('softmax_linear',
                                  shape=[128, n_classes],
                                  dtype=tf.float32,
                                  initializer=tf.truncated_normal_initializer(stddev=0.005, dtype=tf.float32))
        biases = tf.get_variable('biases',
                                 shape=[n_classes],
                                 dtype=tf.float32,
                                 initializer=tf.constant_initializer(0.1))
        softmax_linear = tf.add(tf.matmul(local4, weights), biases, name='softmax_linear')

    return softmax_linear
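
The inference() graph above expects an images tensor of shape [batch_size, height, width, 3]. A queue-based TF1 input pipeline could produce it as sketched below; this is an assumed companion to the get_files helper sketched in the dataset section, and the 208x208 size, thread count, and queue capacity are illustrative values.

import tensorflow as tf

def get_batch(image_paths, labels, image_h=208, image_w=208, batch_size=32, capacity=256):
    """Sketch: turn file paths and labels into shuffled (image, label) batches."""
    images = tf.cast(image_paths, tf.string)
    labels = tf.cast(labels, tf.int32)
    # One (path, label) pair at a time from an input queue
    input_queue = tf.train.slice_input_producer([images, labels], shuffle=True)
    label = input_queue[1]
    image = tf.image.decode_jpeg(tf.read_file(input_queue[0]), channels=3)
    # Fixed spatial size and per-image standardization before batching
    image = tf.image.resize_image_with_crop_or_pad(image, image_h, image_w)
    image = tf.image.per_image_standardization(image)
    image_batch, label_batch = tf.train.batch([image, label],
                                              batch_size=batch_size,
                                              num_threads=4,
                                              capacity=capacity)
    return image_batch, tf.reshape(label_batch, [batch_size])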

