Introduction to TrackEval, the Official MOTChallenge Evaluation Code
Introduction to MOTChallenge:
MOTChallenge is the most widely used benchmark in multi-object tracking.
The CLEAR MOT metrics hold that a good multi-object tracker should have three properties:
1. Every object that appears should be found promptly (detection performance).
2. Estimated object positions should match the true positions as closely as possible (detection/localization performance).
3. Track identities should stay consistent over time, avoiding ID jumps on tracked objects (association performance).
MOTChallenge uses eleven evaluation metrics in total:
| Measure | Better | Perfect | Description |
| --- | --- | --- | --- |
| MOTA | higher | 100% | Tracking accuracy; decreases with the number of FN, FP, and IDs, and can go negative. |
| MOTP | higher | 100% | Tracking precision: the bounding-box overlap between matched GT and detections. |
| IDF1 | higher | 100% | F1 score computed over track IDs. |
| FAF | lower | 0 | Average number of false alarms per frame. |
| MT | higher | 100% | Mostly Tracked: fraction of GT trajectories that are covered for at least 80% of their length. |
| ML | lower | 0 | Mostly Lost: fraction of GT trajectories that are covered for less than 20% of their length. |
| FP | lower | 0 | Total number of false positives, i.e. spurious detections. |
| FN | lower | 0 | Total number of false negatives, i.e. missed detections. |
| IDs | lower | 0 | Total number of identity switches. |
| Frag | lower | 0 | Total number of times trajectories are fragmented (interrupted). |
| Hz | higher | Inf. | Processing speed, excluding detector time. Self-reported by the authors; MOTChallenge cannot compute it, since submissions are offline result files. |
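For reference, the standard CLEAR MOT definitions behind the first two rows are:

$$\mathrm{MOTA} = 1 - \frac{\mathrm{FN} + \mathrm{FP} + \mathrm{IDs}}{\mathrm{GT}}, \qquad \mathrm{MOTP} = \frac{\sum_{t,i} d_{t,i}}{\sum_{t} c_t}$$

where $\mathrm{GT}$ is the total number of ground-truth boxes, $d_{t,i}$ is the overlap of matched pair $i$ in frame $t$, and $c_t$ is the number of matches in frame $t$. Because $\mathrm{FN} + \mathrm{FP} + \mathrm{IDs}$ can exceed $\mathrm{GT}$, MOTA can indeed become negative.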
Introduction to TrackEval:
TrackEval provides code for a number of different tracking evaluation metrics.
**Source code:** https://github.com/JonathonLuiten/TrackEval
**Environment:** a standard Python data-processing setup; the code is written 100% in Python with only numpy and scipy as minimum requirements.
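A minimal environment sketch (the repository URL is the one above; numpy and scipy are the stated minimum requirements):

```
git clone https://github.com/JonathonLuiten/TrackEval.git
cd TrackEval
pip install numpy scipy
```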
**Data format:** download the sample data from the project page and place it under TrackEval/; the samples show that evaluation supports a variety of common tracking datasets.
Using TrackEval - MOTA evaluation
Taking MOTA as the example metric, this section walks through using TrackEval on a custom validation set.
First, under the data folder, gt holds the annotations for each dataset; gt/mot_challenge contains the training- and validation-set annotations (the test set has no public annotations).
```
data
|-- gt
|   |-- bdd100k
|   |-- mot_challenge
|   |-- ......
|   `-- youtube_vis
`-- trackers
    |-- bdd100k
    |-- mot_challenge
    |-- ......
    `-- youtube_vis
```
data/gt/mot_challenge/seqmaps stores, for each dataset, the names of the video sequences to evaluate. MOT17 ships each sequence three times, once per public detector, but the GT annotations are identical, so usually only one detector's sequences are used. If you only use the SDP sequences, delete the other detectors' sequence names before evaluating the MOT17 validation set.
1. Modify the sequence names to evaluate
For example, edit the stock seqmap file data/gt/mot_challenge/seqmaps/MOT17-train.txt, whose contents are:
```
name
MOT17-02-SDP
MOT17-04-SDP
MOT17-05-SDP
MOT17-09-SDP
MOT17-10-SDP
MOT17-11-SDP
MOT17-13-SDP
```
Mine is:
```
name
001
002
003
004
...
011
```
2. Add the validation-set ground truth
To avoid touching the original code more than necessary, delete the files under MOT16-val and drop in your own validation-set ground truth:
```
data
`-- gt
    `-- mot_challenge
        `-- MOT16-val
            |-- 001
            |   |-- gt.txt
            |   `-- seqinfo.ini
            |-- 002
            |   |-- ......
            |-- ...
            `-- 011
                |-- gt.txt
                `-- seqinfo.ini
```
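Each gt.txt is expected in the comma-separated MOTChallenge annotation format, one object per line; the concrete values below are placeholders for illustration:

```
# frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility
1,1,912,484,97,109,1,1,0.8
2,1,914,485,97,109,1,1,0.8
```

seqinfo.ini holds the per-sequence metadata in the standard MOT layout (again, the values here are hypothetical):

```
[Sequence]
name=001
imDir=img1
frameRate=30
seqLength=600
imWidth=1920
imHeight=1080
imExt=.jpg
```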
3. Add the tracking results
Under data/trackers/mot_challenge/MOT16-val, create a yourtracker/data folder holding each sequence's tracking results. For example, the validation-set results go into data/trackers/mot_challenge/MOT16-val/ICPR2022/data:
```
trackers
`-- mot_challenge
    `-- MOT16-val
        `-- ICPR2022
            `-- data
                |-- 001.txt
                |-- ...
                |-- 010.txt
                `-- 011.txt
```
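Each per-sequence result file (e.g. 001.txt) uses the same comma-separated MOTChallenge format, with the last three fields unused and set to -1; the line below is a hypothetical example:

```
# frame, id, bb_left, bb_top, bb_width, bb_height, conf, -1, -1, -1
1,3,794.27,247.59,71.24,174.88,0.9,-1,-1,-1
```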
4. Run the evaluation script
Enter TrackEval/scripts and run:

```
python run_mot_challenge.py --BENCHMARK MOT17 --TRACKERS_TO_EVAL yourtracker --METRICS CLEAR Identity --USE_PARALLEL True --GT_LOC_FORMAT {gt_folder}/{seq}/gt/gt_val_half.txt
```
5. Adapt the evaluation script
You can also modify the evaluation script to fit your own dataset format. The header of every file under TrackEval/scripts has a "Command Line Arguments: Defaults" section explaining each parameter.
Under TrackEval/trackeval/datasets/, copy an existing dataset file as the basis for your own dataset, e.g. sat_challenge_2d_box.py, and modify:
1. The default config function: CLASSES_TO_EVAL, BENCHMARK, SPLIT_TO_EVAL, GT_LOC_FORMAT, etc.
```python
@staticmethod
def get_default_dataset_config():
    """Default class config values"""
    code_path = utils.get_code_path()
    default_config = {
        'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
        'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
        'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
        'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
        'CLASSES_TO_EVAL': ['car'],  # Valid: ['pedestrian']
        'BENCHMARK': 'MOT16',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
        'SPLIT_TO_EVAL': 'val',  # Valid: 'train', 'test', 'all'
        'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
        'PRINT_CONFIG': True,  # Whether to print current config
        'DO_PREPROC': True,  # Whether to perform preprocessing (never done for MOT15)
        'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
        'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
        'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
        'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
        'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)
        'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
        'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'
        'SKIP_SPLIT_FOL': False,  # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
                                  # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
                                  # If True, then the middle 'benchmark-split' folder is skipped for both.
    }
    return default_config
```
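One detail to watch: the default GT_LOC_FORMAT expects an extra gt/ subfolder ({seq}/gt/gt.txt), while the validation-set layout in step 2 above places gt.txt directly under each sequence folder. If you keep that flat layout, the format string presumably needs to drop the subfolder, e.g.:

```python
'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt.txt',  # flat layout: gt.txt directly under the sequence folder
```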
2. The init function, for example:

```python
self.valid_classes = ['car', 'airplane', 'ship', 'train']
self.class_name_to_class_id = {'car': 1, 'airplane': 2, 'ship': 3, 'train': 4}
```

For reference, the stock __init__ being modified is:
```python
def __init__(self, config=None):
    """Initialise dataset, checking that all required files are present"""
    super().__init__()
    # Fill non-given config values with defaults
    self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())

    self.benchmark = self.config['BENCHMARK']
    gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
    self.gt_set = gt_set
    if not self.config['SKIP_SPLIT_FOL']:
        split_fol = gt_set
    else:
        split_fol = ''
    self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
    self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
    self.should_classes_combine = False
    self.use_super_categories = False
    self.data_is_zipped = self.config['INPUT_AS_ZIP']
    self.do_preproc = self.config['DO_PREPROC']

    self.output_fol = self.config['OUTPUT_FOLDER']
    if self.output_fol is None:
        self.output_fol = self.tracker_fol

    self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
    self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']

    # Get classes to eval
    self.valid_classes = ['pedestrian']
    self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
                       for cls in self.config['CLASSES_TO_EVAL']]
    if not all(self.class_list):
        raise TrackEvalException('Attempted to evaluate an invalid class. Only pedestrian class is valid.')
    self.class_name_to_class_id = {'pedestrian': 1, 'person_on_vehicle': 2, 'car': 3, 'bicycle': 4,
                                   'motorbike': 5, 'non_mot_vehicle': 6, 'static_person': 7,
                                   'distractor': 8, 'occluder': 9, 'occluder_on_ground': 10,
                                   'occluder_full': 11, 'reflection': 12, 'crowd': 13}
    self.valid_class_numbers = list(self.class_name_to_class_id.values())

    # Get sequences to eval and check gt files exist
    self.seq_list, self.seq_lengths = self._get_seq_info()
    if len(self.seq_list) < 1:
        raise TrackEvalException('No sequences are selected to be evaluated.')

    # Check gt files exist
    for seq in self.seq_list:
        if not self.data_is_zipped:
            curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
            if not os.path.isfile(curr_file):
                print('GT file not found ' + curr_file)
                raise TrackEvalException('GT file not found for sequence: ' + seq)
    if self.data_is_zipped:
        curr_file = os.path.join(self.gt_fol, 'data.zip')
        if not os.path.isfile(curr_file):
            print('GT file not found ' + curr_file)
            raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))

    # Get trackers to eval
    if self.config['TRACKERS_TO_EVAL'] is None:
        self.tracker_list = os.listdir(self.tracker_fol)
    else:
        self.tracker_list = self.config['TRACKERS_TO_EVAL']

    if self.config['TRACKER_DISPLAY_NAMES'] is None:
        self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
    elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
            len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
        self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
    else:
        raise TrackEvalException('List of tracker files and tracker display names do not match.')

    for tracker in self.tracker_list:
        if self.data_is_zipped:
            curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
            if not os.path.isfile(curr_file):
                print('Tracker file not found: ' + curr_file)
                raise TrackEvalException('Tracker file not found: ' + tracker + '/'
                                         + os.path.basename(curr_file))
        else:
            for seq in self.seq_list:
                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
                if not os.path.isfile(curr_file):
                    print('Tracker file not found: ' + curr_file)
                    raise TrackEvalException('Tracker file not found: ' + tracker + '/'
                                             + self.tracker_sub_fol + '/' + os.path.basename(curr_file))
```
Then register the new class in trackeval/datasets/__init__.py by adding:

```python
from .sat_challenge_2d_box import SatChallenge2DBox
```
3. Modify the run script
Copy TrackEval/scripts/run_mot_challenge.py to run_sat_challenge.py and switch the dataset class:

```python
dataset_list = [trackeval.datasets.SatChallenge2DBox(dataset_config)]
```

For reference, the original script body is:
```python
freeze_support()

# Command line interface:
default_eval_config = trackeval.Evaluator.get_default_eval_config()
default_eval_config['DISPLAY_LESS_PROGRESS'] = False
default_dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()
default_metrics_config = {'METRICS': ['HOTA', 'CLEAR', 'Identity'], 'THRESHOLD': 0.5}
config = {**default_eval_config, **default_dataset_config, **default_metrics_config}  # Merge default configs
parser = argparse.ArgumentParser()
for setting in config.keys():
    if type(config[setting]) == list or type(config[setting]) == type(None):
        parser.add_argument("--" + setting, nargs='+')
    else:
        parser.add_argument("--" + setting)
args = parser.parse_args().__dict__
for setting in args.keys():
    if args[setting] is not None:
        if type(config[setting]) == type(True):
            if args[setting] == 'True':
                x = True
            elif args[setting] == 'False':
                x = False
            else:
                raise Exception('Command line parameter ' + setting + ' must be True or False')
        elif type(config[setting]) == type(1):
            x = int(args[setting])
        elif type(args[setting]) == type(None):
            x = None
        elif setting == 'SEQ_INFO':
            x = dict(zip(args[setting], [None]*len(args[setting])))
        else:
            x = args[setting]
        config[setting] = x
eval_config = {k: v for k, v in config.items() if k in default_eval_config.keys()}
dataset_config = {k: v for k, v in config.items() if k in default_dataset_config.keys()}
metrics_config = {k: v for k, v in config.items() if k in default_metrics_config.keys()}

# Run code
evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = []
for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity,
               trackeval.metrics.VACE]:
    if metric.get_name() in metrics_config['METRICS']:
        metrics_list.append(metric(metrics_config))
if len(metrics_list) == 0:
    raise Exception('No metrics selected for evaluation')
evaluator.evaluate(dataset_list, metrics_list)
```
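The new script is invoked the same way as run_mot_challenge.py; a plausible call for the setup above (tracker name and GT layout as configured in the earlier steps) would be:

```
cd TrackEval/scripts
python run_sat_challenge.py --BENCHMARK MOT16 --SPLIT_TO_EVAL val --TRACKERS_TO_EVAL ICPR2022 --METRICS CLEAR Identity
```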
Building your own MOT dataset: https://blog.csdn.net/gubeiqing/article/details/123648141