
Training a grayscale dataset with AlexNet and flow_from_directory

These are my references: the flow_from_directory example and the AlexNet architecture.

I am trying to train 3 classes with the AlexNet architecture. The dataset consists of grayscale images. I modified the first reference to use categorical class mode and then replaced its CNN model with the AlexNet from the second reference. I get two error messages:

ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_83/convolution' (op: 'Conv2D') with input shapes: [?,1,1,384], [3,3,384,384].

If I change img_width, img_height = 224, 224, I get instead: TypeError: Dense can accept only 1 positional arguments ('units',), but you passed the following positional arguments: [4096, (224, 224, 1)]

Is there a dimension mismatch somewhere in the CNN? Thanks.

Here is the code:

import json
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
#from tensorflow.keras.optimizers import RMSprop


# dimensions of our images.
img_width, img_height = 150,150

train_data_dir = 'data/train'
validation_data_dir = 'data/validation'
nb_train_samples = 200*3
nb_validation_samples = 50*3
epochs = 1
batch_size = 5

if K.image_data_format() == 'channels_first':
    input_shape = (1, img_width, img_height)
else:
    input_shape = (img_width, img_height, 1)
print(input_shape)
model = Sequential()
model.add(Conv2D(filters=96, input_shape=input_shape,data_format='channels_last', kernel_size=(11,11), strides=(4,4), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))

model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))

model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))

# 4th Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))

# 5th Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding='valid'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding='valid'))

model.add(Flatten())
model.add(Dense(4096, input_shape))
model.add(Activation('relu'))
model.add(Dropout(0.4))

model.add(Dense(4096))
model.add(Activation('relu'))
model.add(Dropout(0.4))

model.add(Dense(1000))
model.add(Activation('relu'))
model.add(Dropout(0.4))

# Output Layer
model.add(Dense(3))
model.add(Activation('softmax'))

model.summary()

# Compile the model
model.compile(loss=keras.losses.categorical_crossentropy, optimizer='adam', metrics=['accuracy'])

#model.compile(loss='categorical_crossentropy',optimizer=RMSprop(lr=0.001),metrics=['accuracy'])

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    color_mode='grayscale',
    batch_size=batch_size,
    class_mode='categorical')

model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)

model_json = model.to_json()
with open("model_in_json.json", "w") as json_file:
    json.dump(model_json, json_file)

model.save_weights("model_weights.h5")

几许相思几点泪 2019-12-29 19:39:35
1 answer
  • AlexNet is designed for an input size of 227x227. The paper mentions 224x224, but that is a typo. That is not to say you cannot use other sizes, but the architecture loses its point compared to the original, and the problem becomes more pronounced when the input is too small. The strided convolutions and the max-pooling operations keep reducing the spatial dimensions of the subsequent layers, and you simply run out of dimensions, which is what ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_83/convolution' is telling you (see the size trace sketched after this answer).

    The TypeError comes from model.add(Dense(4096, input_shape)). If you check the Keras documentation for the Dense layer, you will notice that the second positional argument is activation. If anything, you should use model.add(Dense(4096, input_shape=your_input_shape)).
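    To make the size argument concrete, here is a small back-of-the-envelope trace (not from the original answer, just the standard 'valid'-padding arithmetic) of how the feature map shrinks through the conv/pool stack in the question:

    import math

    # out = floor((in - kernel) / stride) + 1 for 'valid' padding
    def out(size, kernel, stride=1):
        return (size - kernel) // stride + 1

    for img in (150, 227):
        s = out(img, 11, 4)  # Conv2D 96, 11x11, stride 4
        s = out(s, 2, 2)     # MaxPooling2D 2x2, stride 2
        s = out(s, 11, 1)    # Conv2D 256, 11x11
        s = out(s, 2, 2)     # MaxPooling2D 2x2, stride 2
        s = out(s, 3, 1)     # Conv2D 384, 3x3
        print(img, '->', s)  # 150 -> 1: the next 3x3 conv has nothing left to slide over
                             # 227 -> 6: the remaining layers still fit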
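    And a minimal sketch of the fixes pointed at above, assuming the rest of the posted script stays as it is (names taken from the question; the note about keras.losses is my own observation, not part of the original answer):

    # 1) Give the AlexNet-style stack enough spatial room to work with:
    img_width, img_height = 227, 227

    # 2) Dense only takes `units` positionally; after Flatten() the incoming
    #    shape is inferred, so the extra argument can simply be dropped
    #    (or passed by keyword, e.g. Dense(4096, input_shape=...)):
    model.add(Dense(4096))

    # 3) model.compile references keras.losses but the script never binds the
    #    name `keras`; either `import keras` at the top or pass the loss as a string:
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])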

    2019-12-29 19:39:47