Computer Vision: Building Custom Datasets for Algorithm Applications


This article covers two ways to build a custom dataset: the quick PaddleX method and the pure-code method.


Install PaddleX as follows:


git clone https://github.com/PaddlePaddle/PaddleX.git
cd PaddleX
git checkout release/1.3
python setup.py install


If you run into problems installing pycocotools


PaddleX depends on the pycocotools package. If pycocotools fails to install, you can install it as follows:

  • Windows
pip install cython
pip install git+https://gitee.com/jiangjiajun/philferriere-cocoapi.git#subdirectory=PythonAPI


  • Linux/Mac

pip install cython  
pip install pycocotools


1. Image classification data


The official format type is called ImageNet.

Format:


MyDataset/ # image classification dataset root directory
|--crystal/ # all images in this folder belong to the Liu Yifei (crystal) class
|  |--crystal1.jpg
|  |--crystal2.jpg
|  |--...
|  |--...
|
|--...
|
|--Mimi/ # all images in this folder belong to the Yang Mi (Mimi) class
|  |--Mimi1.jpg
|  |--Mimi2.jpg
|  |--...
|  |--...
|# the files below are the ones we need to generate:
|--labels.txt
|--train_list.txt
|--test_list.txt
|--val_list.txt


labels.txt, train_list.txt, test_list.txt, and val_list.txt are the files we generate.

labels.txt

labels.txt lists all the classes. Each class's line number is the class id used during model training (line numbers count from 0). For example, labels.txt might contain the following:


crystal
Mimi


train_list.txt

train_list.txt lists the images used for training together with their class ids, for example:


crystal/1.jpg 0
crystal/2.jpg 0
Mimi/1.jpg 1
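
Before the fuller example in 1.1, here is a minimal sketch that produces exactly the labels.txt and list files shown above, assuming the MyDataset layout and a hypothetical 80/10/10 train/val/test split:


import os
import random

root = 'MyDataset'
classes = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))

# labels.txt: one class per line; the line number (counted from 0) is the class id
with open(os.path.join(root, 'labels.txt'), 'w') as f:
    f.write('\n'.join(classes) + '\n')

samples = []
for cls_id, cls in enumerate(classes):
    for name in os.listdir(os.path.join(root, cls)):
        samples.append('{}/{} {}\n'.format(cls, name, cls_id))
random.shuffle(samples)

# 80% train / 10% val / 10% test; adjust the ratios as needed
n = len(samples)
splits = {'train_list.txt': samples[:int(n * 0.8)],
          'val_list.txt': samples[int(n * 0.8):int(n * 0.9)],
          'test_list.txt': samples[int(n * 0.9):]}
for fname, lines in splits.items():
    with open(os.path.join(root, fname), 'w') as f:
        f.writelines(lines)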


1.1 Code implementation


Here we take flower image classification as an example; the dataset link is below.

dataset


import os
import random
import codecs
import shutil
from PIL import Image
train_ratio = 4.0 / 5
all_file_dir = 'flowers'
class_list = [c for c in os.listdir(all_file_dir) if
              os.path.isdir(os.path.join(all_file_dir, c)) and not c.endswith('Set') and not c.startswith('.')]
class_list.sort()
print(class_list)
train_image_dir = os.path.join(all_file_dir, "trainImageSet")
if not os.path.exists(train_image_dir):
    os.makedirs(train_image_dir)
eval_image_dir = os.path.join(all_file_dir, "evalImageSet")
if not os.path.exists(eval_image_dir):
    os.makedirs(eval_image_dir)
train_file = codecs.open(os.path.join(all_file_dir, "train.txt"), 'w')
eval_file = codecs.open(os.path.join(all_file_dir, "eval.txt"), 'w')
with codecs.open(os.path.join(all_file_dir, "label_list.txt"), "w") as label_list:
    label_id = 0
    for class_dir in class_list:
        label_list.write("{0}\t{1}\n".format(label_id, class_dir))
        image_path_pre = os.path.join(all_file_dir, class_dir)
        for file in os.listdir(image_path_pre):
            try:
                img = Image.open(os.path.join(image_path_pre, file))
                if random.uniform(0, 1) <= train_ratio:
                    shutil.copyfile(os.path.join(image_path_pre, file), os.path.join(train_image_dir, file))
                    train_file.write("{0}\t{1}\n".format(os.path.join(train_image_dir, file), label_id))
                else:
                    shutil.copyfile(os.path.join(image_path_pre, file), os.path.join(eval_image_dir, file))
                    eval_file.write("{0}\t{1}\n".format(os.path.join(eval_image_dir, file), label_id))
            except Exception:
                # some files cannot be opened; skip them (a little data cleaning is needed here)
                pass
        label_id += 1
train_file.close()
eval_file.close()


1.2 PaddleX implementation


paddlex --split_dataset --format ImageNet --dataset_dir MyDataset --val_value 0.2 --test_value 0.1

Here MyDataset is your own dataset folder.


2. Object detection data


Object detection data is more complex than classification data: in each image, the position and class of every target region must be annotated.


The position of a target region is usually given as a rectangular box, expressed in one of the following three ways:


  • x1,y1,x2,y2: (x1,y1) is the top-left corner, (x2,y2) is the bottom-right corner
  • x1,y1,w,h: (x1,y1) is the top-left corner, w is the region width, h is the region height
  • xc,yc,w,h: (xc,yc) is the region center, w is the region width, h is the region height


VOC uses [x1,y1,x2,y2] to represent an object's bounding box.

COCO uses [x1,y1,w,h] to represent an object's bounding box. A conversion sketch between these representations follows.
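
Since switching between these conventions comes up constantly when mixing VOC and COCO tooling, here is a minimal, dependency-free sketch of the conversions:


def xyxy_to_xywh(box):
    # VOC-style corners [x1, y1, x2, y2] -> COCO-style [x1, y1, w, h]
    x1, y1, x2, y2 = box
    return [x1, y1, x2 - x1, y2 - y1]

def xywh_to_xyxy(box):
    # COCO-style [x1, y1, w, h] -> corners [x1, y1, x2, y2]
    x1, y1, w, h = box
    return [x1, y1, x1 + w, y1 + h]

def xywh_to_cxcywh(box):
    # COCO-style [x1, y1, w, h] -> center form [xc, yc, w, h]
    x1, y1, w, h = box
    return [x1 + w / 2.0, y1 + h / 2.0, w, h]

print(xyxy_to_xywh([10, 20, 50, 80]))    # [10, 20, 40, 60]
print(xywh_to_cxcywh([10, 20, 40, 60]))  # [30.0, 50.0, 40, 60]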


Object detection data comes in three types:

  • VOC
  • COCO
  • User-defined

There are two annotation tools for object detection:

  • labelme: annotation files are JSON, corresponding to the COCO dataset type
  • labelimg: annotation files are XML, corresponding to the VOC dataset type

Both tools are easy to pick up; play with one for a minute and you will know how to annotate. A sample labelme annotation is sketched below.
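
A minimal sketch of a labelme annotation file, for orientation. The field names are labelme's own; the concrete values are invented. The x2coco.py script in section 2.2.1 reads exactly these fields (imageHeight, imageWidth, imagePath, shapes):


{
  "shapes": [
    {
      "label": "dog",
      "points": [[100.0, 150.0], [300.0, 400.0]],
      "shape_type": "rectangle"
    },
    {
      "label": "cat",
      "points": [[50.0, 60.0], [120.0, 65.0], [110.0, 140.0]],
      "shape_type": "polygon"
    }
  ],
  "imagePath": "1.jpg",
  "imageHeight": 480,
  "imageWidth": 640
}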


2.1 VOC


The directory tree below is the file layout we want to end up with:


MyDataset/ 
├── Annotations
│   ├── 000.xml
│   ├── 001.xml
│   ├── 002.xml
│   ├── 003.xml
│   ├── ......
├── JPEGImages
│   ├── 000.jpg
│   ├── 001.jpg
│   ├── 002.jpg
│   ├── 003.jpg
│   ├── ......
├── label_list.txt
├── test.txt
├── trainval.txt
└── val.txt


The txt files at the end are the ones we need to generate.

trainval.txt


JPEGImages/1.jpg Annotations/1.xml
JPEGImages/2.jpg Annotations/2.xml
... ...
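
Since each line simply pairs an image with its annotation file, a minimal sketch for generating such a list (assuming every image in JPEGImages has a same-named XML in Annotations) is:


import os

root = 'MyDataset'
with open(os.path.join(root, 'trainval.txt'), 'w') as f:
    for img in sorted(os.listdir(os.path.join(root, 'JPEGImages'))):
        stem = os.path.splitext(img)[0]
        # only keep images that actually have a matching annotation file
        if os.path.exists(os.path.join(root, 'Annotations', stem + '.xml')):
            f.write('JPEGImages/{} Annotations/{}.xml\n'.format(img, stem))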


2.1.1 Code implementation


import os
import random

# 10% of the samples go into the trainval pool (split further into test/val);
# the remaining 90% become the training set, so the final split is roughly
# 90% train / 9% test / 1% val
trainval_percent = 0.1
train_percent = 0.9
xmlfilepath = 'MyDataset/Annotations'
txtsavepath = 'MyDataset/JPEGImages/Main'
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)
total_xml = os.listdir(xmlfilepath)
num = len(total_xml)
indices = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)
ftrainval = open('MyDataset/JPEGImages/Main/trainval.txt', 'w')
ftest = open('MyDataset/JPEGImages/Main/test.txt', 'w')
ftrain = open('MyDataset/JPEGImages/Main/train.txt', 'w')
fval = open('MyDataset/JPEGImages/Main/val.txt', 'w')
for i in indices:
    name = total_xml[i][:-4] + '\n'
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftest.write(name)
        else:
            fval.write(name)
    else:
        ftrain.write(name)
ftrainval.close()
ftrain.close()
fval.close()
ftest.close()
import xml.etree.ElementTree as ET
import os
from os import getcwd
sets = ['train', 'test','val']
# your own class names
classes = ["dogs", 'cats']
def convert(size, box):
    # convert a VOC box (xmin, xmax, ymin, ymax) to a normalized YOLO box (xc, yc, w, h)
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0
    y = (box[2] + box[3])/2.0
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
def convert_annotation(image_id):
    in_file = open('MyDataset/Annotations/%s.xml' % (image_id))
    out_file = open('MyDataset/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult_node = obj.find('difficult')
        # use `is not None`: an Element with no children is falsy in a bool test
        difficult = difficult_node.text if difficult_node is not None else '0'
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text), float(xmlbox.find('ymin').text),
             float(xmlbox.find('ymax').text))
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')
wd = getcwd()
print(wd)
for image_set in sets:
    if not os.path.exists('MyDataset/labels'):
        os.makedirs('MyDataset/labels')
    image_ids = open('MyDataset/JPEGImages/Main/%s.txt' % (image_set)).read().strip().split()
    list_file = open('MyDataset/%s.txt' % (image_set), 'w')
    for image_id in image_ids:
        list_file.write('MyDataset/JPEGImages/%s.jpg\n' % (image_id))
        convert_annotation(image_id)
    list_file.close()
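
Each file generated under MyDataset/labels holds one line per object in the form class_id xc yc w h, with all coordinates normalized to [0, 1] by the image width and height. For example, a hypothetical object of class 0 sitting in the middle of an image could be recorded as:


0 0.5 0.5 0.25 0.333333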


Once these two scripts have run, you can delete the labels folder and the Main folder.


2.1.2 PaddleX implementation


paddlex --split_dataset --format VOC --dataset_dir MyDataset --val_value 0.2 --test_value 0.1


2.2 COCO


MyDataset/ # instance dataset root directory
|--JPEGImages/ # directory containing the original images
|  |--1.jpg
|  |--2.jpg
|  |--...
|  |--...
|
|--annotations # directory containing the annotation files

Then convert it into the following layout:


├── annotations
│   ├── instances_test2017.json
│   ├── instances_train2017.json
│   └── instances_val2017.json
├── test2017
├── train2017
│   ├── 000.jpg
│   ├── 001.jpg
│   ├── 002.jpg
│   ├── ......


2.2.1 Code implementation


import argparse
import glob
import json
import os
import os.path as osp
import shutil
import xml.etree.ElementTree as ET
import numpy as np
import PIL.Image
import PIL.ImageDraw
from tqdm import tqdm
import cv2
label_to_num = {}
categories_list = []
labels_list = []
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)
def images_labelme(data, num):
    image = {}
    image['height'] = data['imageHeight']
    image['width'] = data['imageWidth']
    image['id'] = num + 1
    if '\\' in data['imagePath']:
        image['file_name'] = data['imagePath'].split('\\')[-1]
    else:
        image['file_name'] = data['imagePath'].split('/')[-1]
    return image
def images_cityscape(data, num, img_file):
    image = {}
    image['height'] = data['imgHeight']
    image['width'] = data['imgWidth']
    image['id'] = num + 1
    image['file_name'] = img_file
    return image
def categories(label, labels_list):
    category = {}
    category['supercategory'] = 'component'
    category['id'] = len(labels_list) + 1
    category['name'] = label
    return category
def annotations_rectangle(points, label, image_num, object_num, label_to_num):
    annotation = {}
    seg_points = np.asarray(points).copy()
    seg_points[1, :] = np.asarray(points)[2, :]
    seg_points[2, :] = np.asarray(points)[1, :]
    annotation['segmentation'] = [list(seg_points.flatten())]
    annotation['iscrowd'] = 0
    annotation['image_id'] = image_num + 1
    annotation['bbox'] = list(
        map(float, [
            points[0][0], points[0][1], points[1][0] - points[0][0], points[1][
                1] - points[0][1]
        ]))
    annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]
    annotation['category_id'] = label_to_num[label]
    annotation['id'] = object_num + 1
    return annotation
def annotations_polygon(height, width, points, label, image_num, object_num,
                        label_to_num):
    annotation = {}
    annotation['segmentation'] = [list(np.asarray(points).flatten())]
    annotation['iscrowd'] = 0
    annotation['image_id'] = image_num + 1
    annotation['bbox'] = list(map(float, get_bbox(height, width, points)))
    annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]
    annotation['category_id'] = label_to_num[label]
    annotation['id'] = object_num + 1
    return annotation
def get_bbox(height, width, points):
    polygons = points
    mask = np.zeros([height, width], dtype=np.uint8)
    mask = PIL.Image.fromarray(mask)
    xy = list(map(tuple, polygons))
    PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
    mask = np.array(mask, dtype=bool)
    index = np.argwhere(mask == 1)
    rows = index[:, 0]
    clos = index[:, 1]
    left_top_r = np.min(rows)
    left_top_c = np.min(clos)
    right_bottom_r = np.max(rows)
    right_bottom_c = np.max(clos)
    return [
        left_top_c, left_top_r, right_bottom_c - left_top_c,
        right_bottom_r - left_top_r
    ]
def deal_json(ds_type, img_path, json_path):
    data_coco = {}
    images_list = []
    annotations_list = []
    image_num = -1
    object_num = -1
    for img_file in os.listdir(img_path):
        img_label = os.path.splitext(img_file)[0]
        if img_file.split('.')[
                -1] not in ['bmp', 'jpg', 'jpeg', 'png', 'JPEG', 'JPG', 'PNG']:
            continue
        label_file = osp.join(json_path, img_label + '.json')
        print('Generating dataset from:', label_file)
        image_num = image_num + 1
        with open(label_file) as f:
            data = json.load(f)
            if ds_type == 'labelme':
                images_list.append(images_labelme(data, image_num))
            elif ds_type == 'cityscape':
                images_list.append(images_cityscape(data, image_num, img_file))
            if ds_type == 'labelme':
                for shapes in data['shapes']:
                    object_num = object_num + 1
                    label = shapes['label']
                    if label not in labels_list:
                        categories_list.append(categories(label, labels_list))
                        labels_list.append(label)
                        label_to_num[label] = len(labels_list)
                    p_type = shapes['shape_type']
                    if p_type == 'polygon':
                        points = shapes['points']
                        annotations_list.append(
                            annotations_polygon(data['imageHeight'], data[
                                'imageWidth'], points, label, image_num,
                                                object_num, label_to_num))
                    if p_type == 'rectangle':
                        (x1, y1), (x2, y2) = shapes['points']
                        x1, x2 = sorted([x1, x2])
                        y1, y2 = sorted([y1, y2])
                        points = [[x1, y1], [x2, y2], [x1, y2], [x2, y1]]
                        annotations_list.append(
                            annotations_rectangle(points, label, image_num,
                                                  object_num, label_to_num))
            elif ds_type == 'cityscape':
                for shapes in data['objects']:
                    object_num = object_num + 1
                    label = shapes['label']
                    if label not in labels_list:
                        categories_list.append(categories(label, labels_list))
                        labels_list.append(label)
                        label_to_num[label] = len(labels_list)
                    points = shapes['polygon']
                    annotations_list.append(
                        annotations_polygon(data['imgHeight'], data[
                            'imgWidth'], points, label, image_num, object_num,
                                            label_to_num))
    data_coco['images'] = images_list
    data_coco['categories'] = categories_list
    data_coco['annotations'] = annotations_list
    return data_coco
def voc_get_label_anno(ann_dir_path, ann_ids_path, labels_path):
    with open(labels_path, 'r') as f:
        labels_str = f.read().split()
    labels_ids = list(range(1, len(labels_str) + 1))
    with open(ann_ids_path, 'r') as f:
        ann_ids = [lin.strip().split(' ')[-1] for lin in f.readlines()]
    ann_paths = []
    for aid in ann_ids:
        if aid.endswith('xml'):
            ann_path = os.path.join(ann_dir_path, aid)
        else:
            ann_path = os.path.join(ann_dir_path, aid + '.xml')
        ann_paths.append(ann_path)
    return dict(zip(labels_str, labels_ids)), ann_paths
def voc_get_image_info(annotation_root, im_id):
    filename = annotation_root.findtext('filename')
    assert filename is not None
    img_name = os.path.basename(filename)
    size = annotation_root.find('size')
    width = float(size.findtext('width'))
    height = float(size.findtext('height'))
    image_info = {
        'file_name': filename,
        'height': height,
        'width': width,
        'id': im_id
    }
    return image_info
def voc_get_coco_annotation(obj, label2id):
    label = obj.findtext('name')
    assert label in label2id, "label is not in label2id."
    category_id = label2id[label]
    bndbox = obj.find('bndbox')
    xmin = float(bndbox.findtext('xmin'))
    ymin = float(bndbox.findtext('ymin'))
    xmax = float(bndbox.findtext('xmax'))
    ymax = float(bndbox.findtext('ymax'))
    assert xmax > xmin and ymax > ymin, "Box size error."
    o_width = xmax - xmin
    o_height = ymax - ymin
    anno = {
        'area': o_width * o_height,
        'iscrowd': 0,
        'bbox': [xmin, ymin, o_width, o_height],
        'category_id': category_id,
        'ignore': 0,
    }
    return anno
def voc_xmls_to_cocojson(annotation_paths, label2id, output_dir, output_file):
    output_json_dict = {
        "images": [],
        "type": "instances",
        "annotations": [],
        "categories": []
    }
    bnd_id = 1  # bounding box start id
    im_id = 0
    print('Start converting !')
    for a_path in tqdm(annotation_paths):
        # Read annotation xml
        ann_tree = ET.parse(a_path)
        ann_root = ann_tree.getroot()
        img_info = voc_get_image_info(ann_root, im_id)
        output_json_dict['images'].append(img_info)
        for obj in ann_root.findall('object'):
            ann = voc_get_coco_annotation(obj=obj, label2id=label2id)
            ann.update({'image_id': im_id, 'id': bnd_id})
            output_json_dict['annotations'].append(ann)
            bnd_id = bnd_id + 1
        im_id += 1
    for label, label_id in label2id.items():
        category_info = {'supercategory': 'none', 'id': label_id, 'name': label}
        output_json_dict['categories'].append(category_info)
    output_file = os.path.join(output_dir, output_file)
    with open(output_file, 'w') as f:
        output_json = json.dumps(output_json_dict)
        f.write(output_json)
def widerface_to_cocojson(root_path):
    train_gt_txt = os.path.join(root_path, "wider_face_split", "wider_face_train_bbx_gt.txt")
    val_gt_txt = os.path.join(root_path, "wider_face_split", "wider_face_val_bbx_gt.txt")
    train_img_dir = os.path.join(root_path, "WIDER_train", "images")
    val_img_dir = os.path.join(root_path, "WIDER_val", "images")
    assert os.path.exists(train_gt_txt)
    assert os.path.exists(val_gt_txt)
    assert os.path.exists(train_img_dir)
    assert os.path.exists(val_img_dir)
    save_path = os.path.join(root_path, "widerface_train.json")
    widerface_convert(train_gt_txt, train_img_dir, save_path)
    print("Wider Face train dataset converts sucess, the json path: {}".format(save_path))
    save_path = os.path.join(root_path, "widerface_val.json")
    widerface_convert(val_gt_txt, val_img_dir, save_path)
    print("Wider Face val dataset converts sucess, the json path: {}".format(save_path))
def widerface_convert(gt_txt, img_dir, save_path):
    output_json_dict = {
        "images": [],
        "type": "instances",
        "annotations": [],
        "categories": [{'supercategory': 'none', 'id': 0, 'name': "human_face"}]
    }
    bnd_id = 1  # bounding box start id
    im_id = 0
    print('Start converting !')
    with open(gt_txt) as fd:
        lines = fd.readlines()
    i = 0
    while i < len(lines):
        image_name = lines[i].strip()
        bbox_num = int(lines[i + 1].strip())
        i += 2
        img_info = get_widerface_image_info(img_dir, image_name, im_id)
        if img_info:
            output_json_dict["images"].append(img_info)
            for j in range(i, i + bbox_num):
                anno = get_widerface_ann_info(lines[j])
                anno.update({'image_id': im_id, 'id': bnd_id})
                output_json_dict['annotations'].append(anno)
                bnd_id += 1
        else:
            print("The image dose not exist: {}".format(os.path.join(img_dir, image_name)))
        bbox_num = 1 if bbox_num == 0 else bbox_num
        i += bbox_num
        im_id += 1
    with open(save_path, 'w') as f:
        output_json = json.dumps(output_json_dict)
        f.write(output_json)
def get_widerface_image_info(img_root, img_relative_path, img_id):
    image_info = {}
    save_path = os.path.join(img_root, img_relative_path)
    if os.path.exists(save_path):
        img = cv2.imread(save_path)
        image_info["file_name"] = os.path.join(os.path.basename(
            os.path.dirname(img_root)), os.path.basename(img_root),
            img_relative_path)
        image_info["height"] = img.shape[0]
        image_info["width"] = img.shape[1]
        image_info["id"] = img_id
    return image_info
def get_widerface_ann_info(info):
    info = [int(x) for x in info.strip().split()]
    anno = {
        'area': info[2] * info[3],
        'iscrowd': 0,
        'bbox': [info[0], info[1], info[2], info[3]],
        'category_id': 0,
        'ignore': 0,
        'blur': info[4],
        'expression': info[5],
        'illumination': info[6],
        'invalid': info[7],
        'occlusion': info[8],
        'pose': info[9]
    }
    return anno
def main():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument(
        '--dataset_type',
        help='the type of dataset, can be `voc`, `widerface`, `labelme` or `cityscape`')
    parser.add_argument('--json_input_dir', help='input annotated directory')
    parser.add_argument('--image_input_dir', help='image directory')
    parser.add_argument(
        '--output_dir', help='output dataset directory', default='./')
    parser.add_argument(
        '--train_proportion',
        help='the proportion of train dataset',
        type=float,
        default=1.0)
    parser.add_argument(
        '--val_proportion',
        help='the proportion of validation dataset',
        type=float,
        default=0.0)
    parser.add_argument(
        '--test_proportion',
        help='the proportion of test dataset',
        type=float,
        default=0.0)
    parser.add_argument(
        '--voc_anno_dir',
        help='In Voc format dataset, path to annotation files directory.',
        type=str,
        default=None)
    parser.add_argument(
        '--voc_anno_list',
        help='In Voc format dataset, path to annotation files ids list.',
        type=str,
        default=None)
    parser.add_argument(
        '--voc_label_list',
        help='In Voc format dataset, path to label list. The content of each line is a category.',
        type=str,
        default=None)
    parser.add_argument(
        '--voc_out_name',
        type=str,
        default='voc.json',
        help='In Voc format dataset, path to output json file')
    parser.add_argument(
        '--widerface_root_dir',
        help='The root_path for wider face dataset, which contains `wider_face_split`, `WIDER_train` and `WIDER_val`.And the json file will save in this path',
        type=str,
        default=None)
    args = parser.parse_args()
    try:
        assert args.dataset_type in ['voc', 'labelme', 'cityscape', 'widerface']
    except AssertionError as e:
        print(
            'Only the voc, labelme, cityscape and widerface dataset types are supported!')
        os._exit(0)
    if args.dataset_type == 'voc':
        assert args.voc_anno_dir and args.voc_anno_list and args.voc_label_list
        label2id, ann_paths = voc_get_label_anno(
            args.voc_anno_dir, args.voc_anno_list, args.voc_label_list)
        voc_xmls_to_cocojson(
            annotation_paths=ann_paths,
            label2id=label2id,
            output_dir=args.output_dir,
            output_file=args.voc_out_name)
    elif args.dataset_type == "widerface":
        assert args.widerface_root_dir
        widerface_to_cocojson(args.widerface_root_dir)
    else:
        try:
            assert os.path.exists(args.json_input_dir)
        except AssertionError as e:
            print('The json folder does not exist!')
            os._exit(0)
        try:
            assert os.path.exists(args.image_input_dir)
        except AssertionError as e:
            print('The image folder does not exist!')
            os._exit(0)
        try:
            assert abs(args.train_proportion + args.val_proportion \
                    + args.test_proportion - 1.0) < 1e-5
        except AssertionError as e:
            print(
                'The sum of the proportions of the training, validation and test datasets must be 1!'
            )
            os._exit(0)
        # Allocate the dataset.
        total_num = len(glob.glob(osp.join(args.json_input_dir, '*.json')))
        if args.train_proportion != 0:
            train_num = int(total_num * args.train_proportion)
            out_dir = args.output_dir + '/train'
            if not os.path.exists(out_dir):
                os.makedirs(out_dir)
        else:
            train_num = 0
        if args.val_proportion == 0.0:
            val_num = 0
            test_num = total_num - train_num
            out_dir = args.output_dir + '/test'
            if args.test_proportion != 0.0 and not os.path.exists(out_dir):
                os.makedirs(out_dir)
        else:
            val_num = int(total_num * args.val_proportion)
            test_num = total_num - train_num - val_num
            val_out_dir = args.output_dir + '/val'
            if not os.path.exists(val_out_dir):
                os.makedirs(val_out_dir)
            test_out_dir = args.output_dir + '/test'
            if args.test_proportion != 0.0 and not os.path.exists(test_out_dir):
                os.makedirs(test_out_dir)
        count = 1
        for img_name in os.listdir(args.image_input_dir):
            if count <= train_num:
                if osp.exists(args.output_dir + '/train/'):
                    shutil.copyfile(
                        osp.join(args.image_input_dir, img_name),
                        osp.join(args.output_dir + '/train/', img_name))
            else:
                if count <= train_num + val_num:
                    if osp.exists(args.output_dir + '/val/'):
                        shutil.copyfile(
                            osp.join(args.image_input_dir, img_name),
                            osp.join(args.output_dir + '/val/', img_name))
                else:
                    if osp.exists(args.output_dir + '/test/'):
                        shutil.copyfile(
                            osp.join(args.image_input_dir, img_name),
                            osp.join(args.output_dir + '/test/', img_name))
            count = count + 1
        # Deal with the json files.
        if not os.path.exists(args.output_dir + '/annotations'):
            os.makedirs(args.output_dir + '/annotations')
        if args.train_proportion != 0:
            train_data_coco = deal_json(args.dataset_type,
                                        args.output_dir + '/train',
                                        args.json_input_dir)
            train_json_path = osp.join(args.output_dir + '/annotations',
                                       'instance_train.json')
            json.dump(
                train_data_coco,
                open(train_json_path, 'w'),
                indent=4,
                cls=MyEncoder)
        if args.val_proportion != 0:
            val_data_coco = deal_json(args.dataset_type,
                                      args.output_dir + '/val',
                                      args.json_input_dir)
            val_json_path = osp.join(args.output_dir + '/annotations',
                                     'instance_val.json')
            json.dump(
                val_data_coco,
                open(val_json_path, 'w'),
                indent=4,
                cls=MyEncoder)
        if args.test_proportion != 0:
            test_data_coco = deal_json(args.dataset_type,
                                       args.output_dir + '/test',
                                       args.json_input_dir)
            test_json_path = osp.join(args.output_dir + '/annotations',
                                      'instance_test.json')
            json.dump(
                test_data_coco,
                open(test_json_path, 'w'),
                indent=4,
                cls=MyEncoder)
if __name__ == '__main__':
    main()


Run it with:


python tools/x2coco.py \
                --dataset_type labelme \
                --json_input_dir MyDataset/annotations \
                --image_input_dir MyDataset/JPEGImages \
                --output_dir MyDataset \
                --train_proportion 0.7 \
                --val_proportion 0.2 \
                --test_proportion 0.1
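
To sanity-check the generated annotations (the script above writes instance_train.json and friends under MyDataset/annotations), the file can be loaded back with pycocotools, for example:


from pycocotools.coco import COCO

coco = COCO('MyDataset/annotations/instance_train.json')
print('images:', len(coco.getImgIds()))
print('annotations:', len(coco.getAnnIds()))
print('categories:', [c['name'] for c in coco.loadCats(coco.getCatIds())])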


2.2.2 PaddleX implementation


paddlex --split_dataset --format COCO --dataset_dir MyDataset --val_value 0.2 --test_value 0.1


3. VOC to COCO


Run:


python tools/x2coco.py \
        --dataset_type voc \
        --voc_anno_dir MyDataset/Annotations/ \
        --voc_anno_list MyDataset/train.txt \
        --voc_label_list MyDataset/label_list.txt \
        --voc_out_name MyDataset/coco/voc_train.json


After obtaining the JSON file, split the images:


import os
from PIL import Image
'''
Split the images under JPEGImages into train2017 and val2017
according to trainval.txt and test.txt under Main
'''
def voc2yolo(train_txt_path, val_txt_path, image_dir_path, train_image_save_path, val_image_save_path):
    '''
    :param train_txt_path: path to the train.txt file
    :param val_txt_path:   path to the test.txt file
    :param image_dir_path: path to the image folder of the VOC dataset
    :param train_image_save_path: folder in which to save the training images
    :param val_image_save_path:   folder in which to save the test images
    :return:
    '''
    # place images into the train2017 and val2017 folders under images,
    # following train.txt and val.txt
    train_list = []
    with open(train_txt_path,"r") as f:
        for line in f:
            train_list.append(line.strip())
    # print(train_list)
    #
    val_list = []
    with open(val_txt_path, "r") as f:
        for line in f:
            val_list.append(line.strip())
    # print(val_list)
    all_images_list = []
    for image in os.listdir(image_dir_path):
        new_image = image.split(".")[0]
        all_images_list.append(new_image)
        img = Image.open(os.path.join(image_dir_path, image))
        if new_image in train_list:
            if not os.path.exists(train_image_save_path):
                os.makedirs(train_image_save_path)
            img.save(os.path.join(train_image_save_path,image))
        else:
            # any image not listed in train_list is treated as a val image
            if not os.path.exists(val_image_save_path):
                os.makedirs(val_image_save_path)
            img.save(os.path.join(val_image_save_path,image))
    # print(all_images_list)
if __name__ == "__main__":
    voc2yolo("MyDataset/train.txt",
             "MyDataset/test.txt",
             "MyDataset/JPEGImages",
             "MyDataset/coco/train2017",
             "MyDataset/coco/val2017")