MRCNN Processing
Now let's look at mrcnn itself. Before training, we need to define an mrcnn Dataset class. This class supplies information about each image, such as the class it belongs to and where the objects are located within it. mrcnn.utils contains this class.
This is where things get a little tricky, and you will need to read some source code. These are the functions you need to override:
https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py
- add_class, which determines the number of classes for the model
- add_image, where you define the image id and the image path (if applicable)
- load_image, where the image data is loaded
- load_mask, which fetches the mask/bounding-box information for an image
# define drones dataset using mrcnn utils class
class DronesDataset(utils.Dataset):
    def __init__(self, X, y):
        # init with numpy X,y
        self.X = X
        self.y = y
        super().__init__()

    def load_dataset(self):
        self.add_class("dataset", 1, "drones")  # only 1 class, drones
        for i in range(len(self.X)):
            self.add_image("dataset", i, path=None)

    def load_image(self, image_id):
        image = self.X[image_id]  # where image_id is index of X
        return image

    def load_mask(self, image_id):
        # get details of image
        info = self.image_info[image_id]
        box = self.y[info["id"]]  # one box per image: [row_s, row_e, col_s, col_e]
        # one channel per instance; each image here contains a single drone
        masks = np.zeros([128, 128, 1], dtype='uint8')
        row_s, row_e = box[0], box[1]
        col_s, col_e = box[2], box[3]
        masks[row_s:row_e, col_s:col_e, 0] = 1  # create mask with the same boundaries as the bounding box
        class_ids = [1]  # class 1 = drones
        return masks, np.array(class_ids).astype(np.uint8)
Since we have already formatted the images as NumPy arrays, we can simply initialize the Dataset class with the arrays and load images and bounding boxes by indexing into them.
Next, split the data into training and validation sets.
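As a quick sanity check of the mask construction, a minimal NumPy-only sketch (independent of mrcnn; the 128x128 size and the [row_s, row_e, col_s, col_e] box layout follow the dataset class above, and the example box values are made up) confirms that a mask built from a box covers exactly the box area:

```python
import numpy as np

# One fake 128x128 RGB image and one box in [row_s, row_e, col_s, col_e] order
X = np.zeros((1, 128, 128, 3), dtype=np.uint8)
y = np.array([[40, 80, 30, 90]])

# Build the mask the same way load_mask does
box = y[0]
mask = np.zeros((128, 128, 1), dtype=np.uint8)
mask[box[0]:box[1], box[2]:box[3], 0] = 1

# The filled area should equal the box area
assert mask.sum() == (box[1] - box[0]) * (box[3] - box[2])
print(mask.sum())  # 40 * 60 = 2400
```

If this assertion fails on your own data, the usual culprit is a box stored in (x, y) order rather than (row, col) order.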
# train test split 80:20
np.random.seed(42)  # for reproducibility
p = np.random.permutation(len(X))
X = X[p].copy()
y = y[p].copy()

split = int(0.8 * len(X))
X_train = X[:split]
y_train = y[:split]
X_val = X[split:]
y_val = y[split:]
Now load the data into the dataset class.
# load dataset into mrcnn dataset class
train_dataset = DronesDataset(X_train, y_train)
train_dataset.load_dataset()
train_dataset.prepare()

val_dataset = DronesDataset(X_val, y_val)
val_dataset.load_dataset()
val_dataset.prepare()
The prepare() function uses the image id and class id information to ready the data for the mrcnn model. Next comes our modification of the Config class imported from mrcnn. The Config class determines the variables used in training and should be tuned to your dataset.
The variables below are not exhaustive; you can refer to the documentation for the full list.
class DronesConfig(Config):
    # Give the configuration a recognizable name
    NAME = "drones"

    # Train on 1 GPU and 2 images per GPU.
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2

    # Number of classes (including background)
    NUM_CLASSES = 1 + 1  # background + drones

    # Use small images for faster training.
    IMAGE_MIN_DIM = 128
    IMAGE_MAX_DIM = 128

    # Reduce training ROIs per image because the images are small and have few objects.
    TRAIN_ROIS_PER_IMAGE = 20

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)  # anchor side in pixels

    # Set appropriate steps per epoch and validation steps
    STEPS_PER_EPOCH = len(X_train) // (GPU_COUNT * IMAGES_PER_GPU)
    VALIDATION_STEPS = len(X_val) // (GPU_COUNT * IMAGES_PER_GPU)

    # Skip detections with < 70% confidence
    DETECTION_MIN_CONFIDENCE = 0.7

config = DronesConfig()
config.display()
Depending on your compute, you may need to adjust these variables accordingly. Otherwise, you risk getting stuck at "Epoch 1" with no error message whatsoever. There is even a GitHub issue raised for this problem, with many suggested workarounds. If you run into it, be sure to check the thread and test a few of the suggestions.
https://github.com/matterport/Mask_RCNN/issues/287
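When training stalls on limited memory, the simplest knob is the effective batch size, which is GPU_COUNT * IMAGES_PER_GPU. A minimal sketch of the arithmetic behind STEPS_PER_EPOCH and VALIDATION_STEPS (the dataset sizes below are illustrative placeholders, not numbers from this post):

```python
# Effective batch size and step counts for a Mask R-CNN style config.
GPU_COUNT = 1
IMAGES_PER_GPU = 2          # drop this to 1 if training hangs on low memory
n_train, n_val = 160, 40    # placeholder dataset sizes

batch_size = GPU_COUNT * IMAGES_PER_GPU
steps_per_epoch = n_train // batch_size
validation_steps = max(1, n_val // batch_size)  # keep at least 1 validation step

print(batch_size, steps_per_epoch, validation_steps)  # 2 80 20
```

Halving IMAGES_PER_GPU doubles the steps per epoch, so wall-clock time per epoch stays roughly the same while peak memory drops.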
MRCNN Training
mrcnn has been trained on the COCO and ImageNet datasets. To use these pre-trained weights for transfer learning, we need to download them into our environment (remember to define the ROOT_DIR first).
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")

# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
Create the model and load the pre-trained weights.
# Create model in training mode using gpu
with tf.device("/gpu:0"):
    model = modellib.MaskRCNN(mode="training", config=config, model_dir=MODEL_DIR)

# Which weights to start with?
init_with = "imagenet"  # imagenet, coco

if init_with == "imagenet":
    model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
    # Load weights trained on MS COCO, but skip layers that
    # are different due to the different number of classes
    # See README for instructions to download the COCO weights
    model.load_weights(COCO_MODEL_PATH, by_name=True,
                       exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                                "mrcnn_bbox", "mrcnn_mask"])
Now we can start the actual training.
model.train(train_dataset, val_dataset,
            learning_rate=config.LEARNING_RATE,
            epochs=5,
            layers='heads')  # freeze the backbone and train only the head layers
Here I only train the head layers to detect the drones in our dataset. If time permits, you should also fine-tune the model by training all of the preceding layers.
model.train(train_dataset, val_dataset,
            learning_rate=config.LEARNING_RATE / 10,
            epochs=2,
            layers="all")
Once the mrcnn model has finished training, you can save its weights with these two lines of code.
# save weights
model_path = os.path.join(MODEL_DIR, "mask_rcnn_drones.h5")
model.keras_model.save_weights(model_path)
MRCNN Inference
To run inference on other images, we need to create a new inference model with a custom configuration.
# make inference
class InferenceConfig(DronesConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir=MODEL_DIR)

# Load trained weights
model_path = os.path.join(MODEL_DIR, "mask_rcnn_drones.h5")
model.load_weights(model_path, by_name=True)
Visualization
def get_ax(rows=1, cols=1, size=8):
    _, ax = plt.subplots(rows, cols, figsize=(size * cols, size * rows))
    return ax

# Test on a random image
image_id = random.choice(val_dataset.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask = \
    modellib.load_image_gt(val_dataset, inference_config,
                           image_id, use_mini_mask=False)

results = model.detect([original_image], verbose=1)
r = results[0]

visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
                            val_dataset.class_names, r['scores'], ax=get_ax())
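Beyond eyeballing the overlay, you can score a predicted box against the ground truth with intersection-over-union. A minimal sketch (boxes in [y1, x1, y2, x2] order, matching mrcnn's r['rois'] and gt_bbox; the example coordinates are made up for illustration):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in [y1, x1, y2, x2] order."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth vs. predicted box
gt = [40, 30, 80, 90]
pred = [42, 28, 82, 92]
print(round(box_iou(gt, pred), 3))  # 0.851
```

An IoU of 0.5 is the conventional threshold for counting a detection as correct; a well-trained model on easy images should score well above that.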
And that's it: we have trained an mrcnn model on a custom dataset.