Table Tennis Temporal Action Localization Challenge with PaddlePaddle

Summary: Table Tennis Temporal Action Localization Challenge with PaddlePaddle

The baseline model for this challenge is now available, using the BMN model from PaddleVideo. Developers are welcome to contribute better solutions!


Task Introduction


Temporal action localization (proposal generation) is a challenging task in computer vision and video analysis. Unlike previous temporal action detection and localization competitions such as ActivityNet-TAL and FineAction, this competition uses a more fine-grained action dataset: table tennis broadcast footage. Actions in this dataset span short time intervals and are densely distributed, which makes precise localization of fine-grained actions very hard for conventional models. The task is to localize the stroke actions of the player facing the camera in table tennis broadcast videos (temporal action proposal generation).


Competition Dataset


The dataset contains feature data extracted from standard single-camera HD broadcast footage of international (World Cup, World Championships, Asian Championships, Olympics) and domestic (National Games, China Table Tennis Super League) table tennis matches from the 2019-2021 seasons. It comprises 912 video feature files; each video is between 0 and 6 minutes long, and the features are 2048-dimensional, stored in pkl format. We annotated the in-rally stroke actions of the player facing the camera; a single action lasts between 0 and 2 seconds. The training set contains 729 annotated videos, the A-test set 91 videos, and the B-test set 92 videos. Training labels are provided in json format.


  • The directory structure of the training and test datasets:
| - data
  | - data123004
    | - Features_competition_test_A.tar.gz
  | - data122998
    | - Features_competition_train.tar.gz
    | - label_cls14_train.json 
  • The dataset released for this competition has three parts: a training set, an A-leaderboard test set, and a B-leaderboard test set (released in the second stage). The training set contains 729 samples (videos), the A test set 91 samples, and the B test set 92 samples;
  • The Features directory contains 912 video features extracted with ppTSM, saved in pkl format, with file names matching the video names. After loading a pkl, a single video's feature is an array of shape (num_of_frames, 2048), for example:
{'image_feature': array([[-0.00178786, -0.00247065,  0.00754537, ..., -0.00248864,
        -0.00233971,  0.00536158],
       [-0.00212389, -0.00323782,  0.0198264 , ...,  0.00029546,
        -0.00265382,  0.01696528],
       [-0.00230571, -0.00363361,  0.01017699, ...,  0.00989012,
        -0.00283369,  0.01878656],
       ...,
       [-0.00126995,  0.01113492, -0.00036558, ...,  0.00343453,
        -0.00191288, -0.00117079],
       [-0.00129959,  0.01329842,  0.00051888, ...,  0.01843636,
        -0.00191984, -0.00067066],
       [-0.00134973,  0.02784026, -0.00212213, ...,  0.05027904,
        -0.00198008, -0.00054018]], dtype=float32)}
<class 'numpy.ndarray'>
  • Training labels use the following format:
# label_cls14_train.json
{
    'fps': 25,    # video frame rate
    'gts': [
        {
            'url': 'name_of_clip.mp4',      # clip name
            'total_frames': 6341,    # total frame count (may differ from the actual video; the feature array's dimensions are authoritative)
            'actions': [
                {
                    "label_ids": [7],   # action class id
                    "label_names": ["name_of_action"],     # action class name
                    "start_id": 201,  # action start time, in seconds
                    "end_id": 111    # action end time, in seconds
                },
                ...
            ]
        },
        ...
    ]
}
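The feature and label formats above can be exercised with a small self-contained sketch (the array shape and fps come from the description; the file contents and action times below are made up for illustration):

```python
import io
import pickle
import numpy as np

# A toy stand-in for one .pkl feature file: {'image_feature': (num_of_frames, 2048)}.
fake = {"image_feature": np.zeros((150, 2048), dtype=np.float32)}
buf = io.BytesIO()
pickle.dump(fake, buf)
buf.seek(0)

feat = pickle.load(buf)["image_feature"]
num_frames, dim = feat.shape  # dim == 2048

# Since start_id/end_id are given in seconds, convert them to feature
# frame indices with the fps field from the label file.
fps = 25
start_s, end_s = 2.4, 3.6               # hypothetical action boundaries
start_f, end_f = int(start_s * fps), int(end_s * fps)
print(num_frames, dim, start_f, end_f)  # 150 2048 60 90
```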


Dataset Download


The dataset can be downloaded from the competition page; the download link becomes available once your registration is accepted. After downloading, upload the dataset to your AI Studio project; it will then be located under /home/aistudio/data.


If you forked this project directly, the downloaded training and test data are already under /home/aistudio/data.

# Check the dataset path
!tree -L 3 /home/aistudio/data 

Create folders for the training and test feature data, and extract the corresponding archives into them:

%cd /home/aistudio/data
# Create directories for the feature data
%mkdir Features_train Features_test
# Extract the training and test features into their own directories, then delete the archives to save space
!tar -xf data122998/Features_competition_train.tar.gz -C /home/aistudio/data/Features_train --strip-components 1 && rm -rf data122998/Features_competition_train.tar.gz
!tar -xf data123004/Features_competition_test_A.tar.gz -C /home/aistudio/data/Features_test --strip-components 1 && rm -rf data123004/Features_competition_test_A.tar.gz 
# Copy the training labels into the data directory
%cp data122998/label_cls14_train.json /home/aistudio/data
/home/aistudio/data


The Baseline Model: BMN


BMN was developed by Baidu and was the winning solution of the 2019 ActivityNet challenge. It provides an efficient approach to proposal generation for video action localization and was first open-sourced on PaddlePaddle. The model introduces a Boundary-Matching (BM) mechanism to score proposal confidence: all candidate proposals, indexed by the position of their start boundary and their length, are arranged into a 2D BM confidence map, where the value at each point is the confidence score of the corresponding proposal. The network consists of three modules: the base module, a backbone that processes the input feature sequence; the TEM module, which predicts for every temporal position the probabilities that an action starts or ends there; and the PEM module, which generates the BM confidence map.
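How the 2D BM confidence map indexes proposals can be illustrated with a short sketch (plain NumPy, not the actual BMN code; names and sizes are illustrative):

```python
import numpy as np

tscale = 100                     # number of temporal positions
# bm_map[d, s]: confidence of the proposal starting at position s
# and lasting d + 1 positions (values here are random placeholders).
bm_map = np.random.rand(tscale, tscale)

def proposal_of(d, s, tscale=tscale):
    """Map a BM-map index (duration, start) to a (start, end) span,
    or None if the proposal runs past the end of the sequence."""
    end = s + d + 1
    return (s, end) if end <= tscale else None

print(proposal_of(9, 20))   # (20, 30): starts at 20, lasts 10 steps
print(proposal_of(90, 20))  # None: would extend past position 100
```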

(Figure: BMN network architecture)

For the detailed model design, see the original paper: BMN: Boundary-Matching Network for Temporal Action Proposal Generation, Lin et al., Baidu Inc.


Training the BMN Model with PaddleVideo


This project trains the network based on the PaddleVideo project:

  • PaddleVideo "develop" branch github
  • PaddlePaddle-gpu==2.1.2


Download the PaddleVideo Code


%cd ~/work/
# Download the PaddleVideo code from GitHub
#!git clone https://github.com/PaddlePaddle/PaddleVideo.git
# If the network is slow, use the mirror below instead
# !git clone https://hub.fastgit.org/PaddlePaddle/PaddleVideo.git
!git clone https://gitee.com/livingbody/paddle-video-202201-19.git --depth=1
/home/aistudio/work
Cloning into 'paddle-video-202201-19'...
remote: Enumerating objects: 1020, done.
remote: Counting objects: 100% (1020/1020), done.
remote: Compressing objects: 100% (847/847), done.
remote: Total 1020 (delta 244), reused 643 (delta 115), pack-reused 0
Receiving objects: 100% (1020/1020), 53.41 MiB | 3.95 MiB/s, done.
Resolving deltas: 100% (244/244), done.
Checking connectivity... done.


# Move into the PaddleVideo directory
!mv ~/work/paddle-video-202201-19 ~/work/PaddleVideo
%cd ~/work/PaddleVideo/
/home/aistudio/work/PaddleVideo
# Check the source tree structure
!tree /home/aistudio/work/ -L 2
/home/aistudio/work/
└── PaddleVideo
    ├── applications
    ├── benchmark
    ├── configs
    ├── data
    ├── deploy
    ├── docs
    ├── __init__.py
    ├── LICENSE
    ├── main.py
    ├── MANIFEST.in
    ├── paddlevideo
    ├── README_en.md
    ├── README.md
    ├── requirements.txt
    ├── run.sh
    ├── setup.py
    ├── test_tipc
    └── tools
10 directories, 9 files


Set Up the Environment and Install Dependencies


# Set up the PaddleVideo environment
!python3.7 -m pip install --upgrade pip
!python3.7 -m pip install --upgrade -r requirements.txt
# Set up the BMN preprocessing environment
# %cd ~/work/BMN/ 
# !python3.7 -m pip install --upgrade -r requirements.txt
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: pip in /opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages (21.3.1)
...(pip install output truncated)...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
blackhole 1.0.1 requires numpy<=1.19.5, but you have numpy 1.21.5 which is incompatible.
blackhole 1.0.1 requires pandas<=1.1.5,>=0.24.0, but you have pandas 1.3.5 which is incompatible.
Successfully installed PyWavelets-1.2.0 SimpleITK-2.1.1 cython-bbox-0.1.3 et-xmlfile-1.1.0 ffmpeg-python-0.2.0 flake8-import-order-0.18.1 fonttools-4.28.5 iniconfig-1.1.1 lap-0.4.0 lmdb-1.3.0 matplotlib-3.5.1 motmetrics-1.2.0 moviepy-1.0.3 openpyxl-3.0.9 paddledet-2.3.0 paddlenlp-2.2.3 py-1.11.0 py-cpuinfo-8.0.0 pycocotools-2.0.4 pytest-6.2.5 pytest-benchmark-3.4.1 scikit-image-0.19.1 shapely-1.8.0 terminaltables-3.1.10 tifffile-2021.11.2 typeguard-2.13.3 xmltodict-0.12.0

Note: the compatibility errors reported here can be ignored.


BMN Training Data Preparation


Run the script get_instance_for_bmn.py to extract binary-classification proposals with windows=8; from the gts and the features it produces the dataset BMN needs for training:

# Data format
{
  "5679b8ad4eac486cbac82c4c496e278d_133.56_141.56": {     # video name_clip start time_clip end time (s)
          "duration_second": 8.0,
          "duration_frame": 200,
          "feature_frame": 200,
          "subset": "train",
          "annotations": [
              {
                  "segment": [
                      6.36,  # action start time
                      8.0    # action end time
                  ],
                  "label": "11.0",
                  "label_name": "普通"
              }
          ]
      },
      ...
}
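The window-relative segment times in this format can be reproduced with a small sketch (behaviour inferred from the format above, not the script itself; the absolute action times are made up):

```python
WINDOW = 8.0  # seconds per clip, matching windows=8 above

def to_clip(action_start, action_end, clip_start, window=WINDOW):
    """Clip an absolute action [start, end] (in seconds) to the window
    starting at clip_start, returning times relative to the clip, or
    None if the action does not overlap the window."""
    s = max(action_start, clip_start) - clip_start
    e = min(action_end, clip_start + window) - clip_start
    return (round(s, 2), round(e, 2)) if e > s else None

# A hypothetical action at 139.92-142.10 s, seen through the clip
# starting at 133.56 s, gives the segment [6.36, 8.0] shown above:
print(to_clip(139.92, 142.10, 133.56))  # (6.36, 8.0)
```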

Update the input and output paths before running:

dataset = "/home/aistudio/data"
feat_dir = dataset + '/Features_train'
out_dir = dataset + '/Input_for_bmn'
label_files = {
    'train': 'label_cls14_train.json',
    'validation': 'label_cls14_val.json'  
}
# Data preprocessing
%cd /home/aistudio/work/PaddleVideo/applications/TableTennis/
# Generate the validation split
!python3.7 val_split.py
# Generate the BMN training data and labels
!python3.7 get_instance_for_bmn.py
/home/aistudio/work/PaddleVideo/applications/TableTennis
save feature for bmn ...
miss number (broken sample): 217

After this step, the data is stored under /home/aistudio/data/Input_for_bmn/.

# Check the output location -- BMN training inputs and labels
!tree /home/aistudio/data/Input_for_bmn/ -L 1
/home/aistudio/data/Input_for_bmn/
├── feature
└── label.json
1 directory, 1 file

Before training BMN, check that labels and feature data correspond one to one; features without a matching label will not participate in training. Update the input and output paths before running:

###
url = '/home/aistudio/data/Input_for_bmn/feature/'
###
###
with open('/home/aistudio/data/Input_for_bmn/label.json') as f:
    data = json.load(f)
###
###
jsonFile = open('/home/aistudio/data/Input_for_bmn/label_fixed.json', 'w')
###
# Run the label-fixing script
!python3.7 fix_bad_label.py
Feature size: 19032
(Label) Original size: 19249
(Label) Deleted size: 217
(Label) Fixed size: 19032
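The filtering fix_bad_label.py performs amounts to intersecting the feature file names with the label keys; a toy sketch of that logic (the clip names here are hypothetical):

```python
# Names of the .pkl files actually present under Input_for_bmn/feature/.
feature_names = {"clip_a_0.00_8.00", "clip_b_8.00_16.00"}

# Label entries keyed by the same clip names.
labels = {
    "clip_a_0.00_8.00": {"subset": "train"},
    "clip_b_8.00_16.00": {"subset": "train"},
    "clip_c_16.00_24.00": {"subset": "train"},  # broken sample: no feature
}

# Keep only entries whose feature file exists; the rest are deleted.
fixed = {k: v for k, v in labels.items() if k in feature_names}
print(len(labels), len(labels) - len(fixed), len(fixed))  # 3 1 2
```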


BMN Training


Once the data is ready, launch training as follows:

# 4-GPU training
%cd ~/work/PaddleVideo/
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -B -m paddle.distributed.launch --gpus="0,1,2,3"  --log_dir=log_bmn main.py  --validate -c /home/aistudio/work/PaddleVideo/applications/TableTennis/configs/bmn_tabletennis.yaml

Here we train on a single GPU:

python -B main.py  --validate -c /home/aistudio/work/PaddleVideo/applications/TableTennis/configs/bmn_tabletennis.yaml

Before training, update the feature and label paths in the config file /home/aistudio/work/PaddleVideo/applications/TableTennis/configs/bmn_tabletennis.yaml:

###
DATASET:                                            #DATASET field
  batch_size: 4                                 #single card batch size
  test_batch_size: 1
  num_workers: 8
  train:
    format: "BMNDataset"
    file_path: "/home/aistudio/data/Input_for_bmn/label_fixed.json"
    subset: "train"
###
###
PIPELINE:                                           #PIPELINE field
  train:                                            #Mandatory, indicate the pipeline to deal with the training data
    load_feat:
      name: "LoadFeat"
      feat_path: "/home/aistudio/data/Input_for_bmn/feature"
###
# Download the trained baseline weights (provided for testing)
%cd ~
!wget https://videotag.bj.bcebos.com/PaddleVideo-release2.2/BMN_TableTennis_baseline.pdparams
/home/aistudio
--2022-01-19 22:51:38--  https://videotag.bj.bcebos.com/PaddleVideo-release2.2/BMN_TableTennis_baseline.pdparams
Resolving videotag.bj.bcebos.com (videotag.bj.bcebos.com)... 182.61.200.229, 182.61.200.195, 2409:8c04:1001:1002:0:ff:b001:368a
Connecting to videotag.bj.bcebos.com (videotag.bj.bcebos.com)|182.61.200.229|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 12399633 (12M) [application/octet-stream]
Saving to: ‘BMN_TableTennis_baseline.pdparams’
BMN_TableTennis_bas 100%[===================>]  11.83M  13.4MB/s    in 0.9s    
2022-01-19 22:51:39 (13.4 MB/s) - ‘BMN_TableTennis_baseline.pdparams’ saved [12399633/12399633]
# Return to the PaddleVideo root directory
%cd ~/work/PaddleVideo/ 
# Run the training script
!python3.7 -B main.py  --validate -c /home/aistudio/work/PaddleVideo/applications/TableTennis/configs/bmn_tabletennis.yaml
# !python3.7 -B main.py  --validate -c /home/aistudio/work/BMN/configs/bmn_tabletennis_v2.0.yaml -w /home/aistudio/BMN_TableTennis_baseline.pdparams
/home/aistudio/work/PaddleVideo
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlenlp/transformers/funnel/modeling.py:31: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable
[01/19 22:52:04] DALI is not installed, you can improve performance if use DALI
[01/19 22:52:05] DATASET : batch_size : 8, num_workers : 8, format : BMNDataset, file_path : /home/aistudio/data/Input_for_bmn/label_fixed.json (train/valid/test subsets), test_batch_size : 1
[01/19 22:52:05] INFERENCE : BMN_Inference_helper, tscale : 200, dscale : 200, feat_dim : 2048, result_path : data/bmn/BMN_INFERENCE_results
[01/19 22:52:05] METRIC : BMNMetric, subset : validation, ground_truth_filename : /home/aistudio/data/Input_for_bmn/label_gts.json, output_path : data/bmn/BMN_Test_output, result_path : data/bmn/BMN_Test_results
[01/19 22:52:05] MODEL : framework : BMNLocalizer, backbone : BMN (tscale : 200, dscale : 200, feat_dim : 2048, num_sample : 32, num_sample_perbin : 3, prop_boundary_ratio : 0.5), loss : BMNLoss
[01/19 22:52:05] OPTIMIZER : Adam, CustomPiecewiseDecay (values : [0.001, 0.0001], boundaries : [4200]), weight_decay : L2 0.0001
[01/19 22:52:05] PIPELINE : load_feat : LoadFeat (feat_path : /home/aistudio/data/Input_for_bmn/feature), transform : GetMatchMap, GetVideoLabel (tscale : 200, dscale : 200)
[01/19 22:52:05] epochs : 20, log_level : INFO, model_name : BMN
W0119 22:52:05.378010  8439 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0119 22:52:05.383371  8439 device_context.cc:465] device: 0, cuDNN Version: 7.6.
[01/19 22:52:32] train subset video numbers: 18908
[01/19 22:52:32] validation subset video numbers: 124
[01/19 22:52:42] epoch:[  1/20 ] train step:0    loss: 2.62790 lr: 0.001000 batch_cost: 9.39885 sec, reader_cost: 4.12508 sec, ips: 0.85117 instance/sec.
[01/19 22:53:33] epoch:[  1/20 ] train step:10   loss: 2.30798 lr: 0.001000 batch_cost: 5.12008 sec, reader_cost: 0.00040 sec, ips: 1.56248 instance/sec.
[01/19 22:54:24] epoch:[  1/20 ] train step:20   loss: 1.91029 lr: 0.001000 batch_cost: 5.11993 sec, reader_cost: 0.00200 sec, ips: 1.56252 instance/sec.
... (intermediate train steps omitted; loss falls from ~2.6 to ~0.7-1.0 over epoch 1) ...
[01/20 00:59:10] epoch:[  1/20 ] train step:1480 loss: 0.71617 lr: 0.001000 batch_cost: 5.11785 sec, reader_cost: 0.00188 sec, ips: 1.56316 instance/sec.
[01/20 01:00:01] epoch:[  1/20 ] train step:1490 loss: 0.63718 lr: 0.001000 batch_cost: 5.11875 sec, reader_cost: 0.00167 sec, ips: 1.56288 instance/sec.
^C[01/20 01:00:24] main proc 8631 exit, kill process group 8439
[01/20 01:00:24] main proc 8633 exit, kill process group 8439
[01/20 01:00:24] main proc 8637 exit, kill process group 8439
[01/20 01:00:24] main proc 8635 exit, kill process group 8439
[01/20 01:00:24] main proc 8630 exit, kill process group 8439
[01/20 01:00:24] main proc 8632 exit, kill process group 8439
[01/20 01:00:24] main proc 8634 exit, kill process group 8439
[01/20 01:00:24] main proc 8636 exit, kill process group 8439

After training, the model parameters can be found in /home/aistudio/work/PaddleVideo/output/BMN.


BMN Inference


After training, export the trained BMN model for inference:

Note: you can export whichever checkpoint performs best. We also provide a trained model for download and testing; after downloading, simply substitute it for the model after -p.

# -c specifies the config file, -p the trained model parameters, -o the model output directory
!python3.7 tools/export_model.py -c /home/aistudio/work/PaddleVideo/applications/TableTennis/configs/bmn_tabletennis.yaml -p /home/aistudio/work/PaddleVideo/output/BMN/BMN_epoch_00001.pdparams -o inference/BMN

Find the inference script under the PaddleVideo/applications/TableTennis/ directory:

# Switch to the script directory
%cd /home/aistudio/work/PaddleVideo/applications/TableTennis/extractor/

Update the input and output paths in the script extract_bmn_for_tabletennis.py:

###
sys.path.append(
    "/home/aistudio/work/PaddleVideo/TableTennis/predict/action_detect/"
)
###
###
    dataset_dir = '/home/aistudio/data/Features_test/'
    output_dir = '/home/aistudio/data'
###

Update the exported model paths in the inference config file configs/configs.yaml:

###
BMN:
    name: "BMN"
    model_file: "/home/aistudio/work/PaddleVideo/inference/BMN/BMN.pdmodel"
    params_file: "/home/aistudio/work/PaddleVideo/inference/BMN/BMN.pdiparams"
###
# Run the inference script
!python3.7 extract_bmn_for_tabletennis.py

Convert the inference results into the submission format required by the competition:

# Switch to the script directory
%cd /home/aistudio/work/PaddleVideo/applications/TableTennis/datasets/script/

Update the input and output paths in the script submission_format_transfer.py:

###
with open('/home/aistudio/data/Output_for_bmn/prop.json') as f:
    data = json.load(f)
###
###
jsonFile = open('/home/aistudio/data/Output_for_bmn/submission.json', 'w')
###
# Run the script
!python3.7 submission_format_transfer.py


Submitting Results


After the format-conversion script finishes, the ready-to-submit result file submission.json can be found under /home/aistudio/data/Output_for_bmn/. Compress it into submission.zip and upload it to the evaluation site to see your score on the A leaderboard. You can tune hyperparameters or change the network architecture and optimization strategy to improve accuracy.
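The compression step can be done with the Python standard library; a small sketch using the paths above:

```python
import os
import zipfile

def pack_submission(src, dst):
    """Zip a single result file; arcname keeps only the base file name
    inside the archive (no directory prefix)."""
    with zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname=os.path.basename(src))

# pack_submission('/home/aistudio/data/Output_for_bmn/submission.json',
#                 '/home/aistudio/data/Output_for_bmn/submission.zip')
```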


Model Optimization


With hyperparameter tuning, the BMN model can reach higher accuracy. We also hope you use this project to explore new action-localization methods and optimization strategies. For more, visit the PaddleVideo repository and give it a Star⭐ if you find it useful.

