Quick Start
To use this tutorial directly, open the EasyNLP-based Chinese news title generation notebook and click "Open in DSW" in the upper-right corner.
Chinese News Title Generation (Text Summarization) Based on mT5
Text summarization aims to extract, distill, or summarize the key information from long, redundant text sequences. T5 is a sequence-to-sequence pre-trained model proposed by Google that unifies different generation tasks into a single framework and achieves state-of-the-art performance in text generation while maintaining strong transferability. mT5 is the multilingual version of T5, pre-trained on a corpus covering 101 languages.
In EasyNLP, we provide a fine-tuned mT5 (other available models are listed below) so that users can benefit from the model's strong generation capability. The model was obtained by fine-tuning mT5 on news data. Taking news title generation as an example, this tutorial uses mT5 as the backbone of a title generation model and demonstrates how to use EasyNLP for model construction, training, evaluation, and prediction.
Newly added text summarization models:
- hfl/randeng-523M-Summary-Chinese
- hfl/randeng-238M-Summary-Chinese
Newly added news title generation models:
- alibaba-pai/randeng-523M-Summary-Chinese-tuned
- alibaba-pai/randeng-238M-Summary-Chinese-tuned
Runtime Environment Requirements
PAI-Pytorch 1.7/1.8 image, a P100 or V100 GPU instance, and 32 GB of memory.
Installing EasyNLP
We recommend installing EasyNLP from the GitHub source code with the following commands:
! git clone https://github.com/alibaba/EasyNLP.git
! pip install -r EasyNLP/requirements.txt -i http://mirrors.aliyun.com/pypi/simple/
! cd EasyNLP && python setup.py install
You can verify the installation with the following command:
! which easynlp
/home/pai/bin/easynlp
If the easynlp CLI tool is already available on your system, the EasyNLP library has been installed successfully.
Data Preparation
First, download the training and development sets used in this example with the following commands:
! wget http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/generation/cn_train.tsv
! wget http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/generation/cn_dev.tsv
--2022-08-25 10:03:21--  http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/generation/cn_train.tsv
Resolving atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)... 47.101.88.27
Connecting to atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)|47.101.88.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2729152 (2.6M) [text/tab-separated-values]
Saving to: ‘cn_train.tsv’

cn_train.tsv 100%[===================>] 2.60M 9.04MB/s in 0.3s

2022-08-25 10:03:21 (9.04 MB/s) - ‘cn_train.tsv’ saved [2729152/2729152]

--2022-08-25 10:03:22--  http://atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com/release/tutorials/generation/cn_dev.tsv
Resolving atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)... 47.101.88.27
Connecting to atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com (atp-modelzoo-sh.oss-cn-shanghai.aliyuncs.com)|47.101.88.27|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2137595 (2.0M) [text/tab-separated-values]
Saving to: ‘cn_dev.tsv’

cn_dev.tsv 100%[===================>] 2.04M 7.77MB/s in 0.3s

2022-08-25 10:03:22 (7.77 MB/s) - ‘cn_dev.tsv’ saved [2137595/2137595]
After the download completes, you can view the first record with the code below. In the training set, each line is one news item containing the tokenized news title and tokenized news content, separated by a tab character (\t). The development set additionally contains the raw text, the tokenized text, and the news category.
print('Training data sample:')
! head -n 1 cn_train.tsv
print('Development set data sample:')
! head -n 1 cn_dev.tsv
Training data sample: 湖北:“四上企业”复工率已达93.8% 央视网消息:4月1日,记者从湖北省新冠肺炎疫情防控工作新闻发布会上获悉,在各方面共同努力下,湖北省复工复产工作取得了阶段性成效。截至3月31日,湖北省“四上企业”包括规模以上工业、规模以上服务业法人单位等的复工率已达93.8%,复岗率69.3%。武汉市的复工率、复岗率也分别达到了85.4%、40.4%。责任编辑:王诗尧 Development set data sample: 2018年中国小篮球联赛北京赛区总决赛落幕 新华社北京6月18日电 2018年中国小篮球联赛北京赛区总决赛18日在李宁中心篮球馆落下帷幕,经过近两个月的激烈争夺,U8混合组、U10混合组、U10男子组、U12男子组以及U12女子组等5个组别在当天决出了冠亚季军。翠微小学队夺得了U10男子组和U12男子组两个冠军,和一闪电队摘得U10混合组和U12女子组两个桂冠,少年宫小牛队问鼎U8混合组。中国篮协于2017年底推出了以“小篮球 大梦想”为主题的小篮球联赛,并得到了全国各地的积极响应。据中国篮协提供的数据,截至2018年5月底,全国已有31个省区市在当地开展了小篮球联赛,举办小篮球层级联赛的城市有192个、赛区334个,成功报名参赛的球队有15042支,成功报名的运动员达98780人。“小篮球”是一项使用小型篮球的儿童体育活动,篮球、篮筐、场地、规则均按照儿童身体发育的特点而设计。在2017年,中国篮协推出了《小篮球规则》,宗旨是从激发孩子的兴趣入手,改变成人比赛的规则以适合孩子们的身心发展规律,秉持着删繁就简、通俗易懂和遵循篮球发展规律的原则,使用对象为12岁及以下的男孩和女孩,并按照我国小学六年学制划分为U12、U10、U8、U6等年龄组。(完) 2018 年 中国 小 篮球联赛 北京 赛区 总决赛 落幕 新华社 北京 6 月 18 日电 2018 年 中国 小 篮球联赛 北京 赛区 总决赛 18 日 在 李宁 中心 篮球馆 落下 帷幕 , 经过 近 两个 月 的 激烈 争夺 , U8 混合 组 、 U10 混合 组 、 U10 男子组 、 U12 男子组 以及 U12 女子组 等 5 个 组别 在 当天 决出 了 冠亚 季军 。 翠微 小学 队 夺得 了 U10 男子组 和 U12 男子组 两个 冠军 , 和 一 闪电 队 摘得 U10 混合 组和 U12 女子组 两个 桂冠 , 少年宫 小牛队 问鼎 U8 混合 组 。 中国篮协 于 2017 年底 推出 了 以 “ 小 篮球 大 梦想 ” 为 主题 的 小 篮球联赛 , 并 得到 了 全国 各地 的 积极响应 。 据 中国篮协 提供 的 数据 , 截至 2018 年 5 月底 , 全国 已有 31 个 省区市 在 当地 开展 了 小 篮球联赛 , 举办 小 篮球 层级 联赛 的 城市 有 192 个 、 赛区 334 个 , 成功 报名 参赛 的 球队 有 15042 支 , 成功 报名 的 运动员 达 98780 人 。 “ 小 篮球 ” 是 一项 使用 小型 篮球 的 儿童 体育 活动 , 篮球 、 篮筐 、 场地 、 规则 均 按照 儿童 身体 发育 的 特点 而 设计 。 在 2017 年 , 中国篮协 推出 了 《 小 篮球 规则 》 , 宗旨 是从 激发 孩子 的 兴趣 入手 , 改变 成人 比赛 的 规则 以 适合 孩子 们 的 身心 发展 规律 , 秉持着 删繁就简 、 通俗易懂 和 遵循 篮球 发展 规律 的 原则 , 使用 对象 为 12 岁 及 以下 的 男孩 和 女孩 , 并 按照 我国 小学 六年 学制 划分 为 U12 、 U10 、 U8 、 U6 等 年龄组 。 ( 完 ) 体育
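As a quick sanity check outside of EasyNLP, the two-column training format can be parsed with plain Python. The helper below is our own illustration (not part of the EasyNLP API), assuming only what the data description states: one tab separates the tokenized title from the tokenized content.

```python
def parse_train_line(line):
    """Split one line of cn_train.tsv into (title_tokens, content_tokens).

    Each line holds the tokenized title and the tokenized content,
    separated by a single tab character.
    """
    title_tokens, content_tokens = line.rstrip("\n").split("\t", 1)
    return title_tokens, content_tokens

# A tiny synthetic line in the same format as the real data:
sample = "湖北 复工率 达 93.8%\t央视网 消息 : 湖北省 复工 复产 取得 成效\n"
title, content = parse_train_line(sample)
print(title)    # 湖北 复工率 达 93.8%
print(content)  # 央视网 消息 : 湖北省 复工 复产 取得 成效
```

Splitting with `split("\t", 1)` keeps any further tabs inside the content field intact, which matters for the development set where lines carry more than two columns.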
Initialization
In a Python 3.6 environment, we first import the libraries needed for running the model from the freshly installed EasyNLP and perform some initialization. In this tutorial, we use mt5-title-generation-zh as the pre-trained backbone.
# Manually set sys.argv to avoid a conflict between EasyNLP's args and
# Jupyter's own arguments, which would otherwise break initialization.
# This step can be skipped when running the code from the command line or a .py file.
import sys
sys.argv = ['main.py']
import sys
import os

import torch.cuda

sys.path.append('./')

from easynlp.core import Trainer
from easynlp.appzoo.sequence_generation.data import SequenceGenerationDataset
from easynlp.appzoo.sequence_generation.model import SequenceGeneration
from easynlp.appzoo.sequence_generation.evaluator import SequenceGenerationEvaluator
from easynlp.appzoo.sequence_generation.predictor import SequenceGenerationPredictor
from easynlp.appzoo import get_application_model_for_evaluation
from easynlp.utils import initialize_easynlp, get_args
from easynlp.utils.global_vars import parse_user_defined_parameters
from easynlp.core import PredictorManager
from easynlp.utils import get_pretrain_model_path

initialize_easynlp()
args = get_args()

user_defined_parameters = ('pretrain_model_name_or_path=alibaba-pai/mt5-title-generation-zh '
                           'copy=false max_encoder_length=512 min_decoder_length=12 '
                           'max_decoder_length=32 no_repeat_ngram_size=2 num_beams=5 '
                           'num_return_sequences=5')
user_defined_parameters = parse_user_defined_parameters(user_defined_parameters)
args.checkpoint_dir = "./finetuned_zh_model"
[2022-08-25 10:03:32,310.310 dsw34730-66c85d4cdb-6v2c6:74007 INFO utils.py:30] NOTICE: PAIDEBUGGER is turned off.
/home/pai/lib/python3.6/site-packages/OpenSSL/crypto.py:12: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
  from cryptography import x509
Please ignore the following import error if you are using tunnel table io.
No module named '_common_io'
No module named 'easy_predict'
------------------------ arguments ------------------------
  app_name ........................................ text_classify
  append_cols ..................................... None
  buckets ......................................... None
  checkpoint_dir .................................. None
  chief_hosts .....................................
  data_threads .................................... 10
  distributed_backend ............................. nccl
  do_lower_case ................................... False
  epoch_num ....................................... 3.0
  export_tf_checkpoint_type ....................... easytransfer
  first_sequence .................................. None
  gradient_accumulation_steps ..................... 1
  input_schema .................................... None
  is_chief ........................................
  is_master_node .................................. True
  job_name ........................................ None
  label_enumerate_values .......................... None
  label_name ...................................... None
  learning_rate ................................... 5e-05
  local_rank ...................................... None
  logging_steps ................................... 100
  master_port ..................................... 23456
  max_grad_norm ................................... 1.0
  micro_batch_size ................................ 2
  mode ............................................ train
  modelzoo_base_dir ...............................
  n_cpu ........................................... 1
  n_gpu ........................................... 1
  odps_config ..................................... None
  optimizer_type .................................. AdamW
  output_schema ...................................
  outputs ......................................... None
  predict_queue_size .............................. 1024
  predict_slice_size .............................. 4096
  predict_table_read_thread_num ................... 16
  predict_thread_num .............................. 2
  ps_hosts ........................................
  random_seed ..................................... 1234
  rank ............................................ 0
  read_odps ....................................... False
  restore_works_dir ............................... ./.easynlp_predict_restore_works_dir
  resume_from_checkpoint .......................... None
  save_all_checkpoints ............................ False
  save_checkpoint_steps ........................... None
  second_sequence ................................. None
  sequence_length ................................. 16
  skip_first_line ................................. False
  tables .......................................... None
  task_count ...................................... 1
  task_index ...................................... 0
  use_amp ......................................... False
  use_torchacc .................................... False
  user_defined_parameters ......................... None
  user_entry_file ................................. None
  user_script ..................................... None
  warmup_proportion ............................... 0.1
  weight_decay .................................... 0.0001
  worker_count .................................... 1
  worker_cpu ...................................... -1
  worker_gpu ...................................... -1
  worker_hosts .................................... None
  world_size ...................................... 1
-------------------- end of arguments ---------------------
> initializing torch distributed ...
[2022-08-25 10:03:34,252.252 dsw34730-66c85d4cdb-6v2c6:74007 INFO distributed_c10d.py:195] Added key: store_based_barrier_key:1 to store for rank: 0
Init dist done. World size: 1, rank 0, l_rank 0
> setting random seeds to 1234 ...
Note: if the code above raises an "Address already in use" error, run the following commands to terminate the process occupying the port (6000 by default).
netstat -tunlp|grep 6000
kill -9 PID (replace PID with the process ID shown in the output of the previous command)
Loading the Data
We use EasyNLP's built-in SequenceGenerationDataset to load the training and development data. Its main parameters are:
- pretrained_model_name_or_path: name or path of the pre-trained model. Here we use the helper function get_pretrain_model_path to resolve the model name "mt5-title-generation-zh" and download the model automatically
- max_seq_length: maximum text length; longer inputs are truncated and shorter ones are padded
- input_schema: format of the input data. The comma-separated entries correspond, in order, to the tab-separated (\t) fields of each line in the data file; each entry starts with its field name, e.g. label, sent1
- first_sequence, label_name: specify which input_schema fields are used as the input sentence and the label column
- label_enumerate_values: enumeration of the label values
- is_training: whether this is the training phase; True for train_dataset, False for valid_dataset
- app_name: the task to execute, e.g. text classification, sequence labeling, text matching, or text generation
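To make the input_schema convention concrete, here is a minimal sketch (our own illustration, not EasyNLP internals) of how a schema string such as "title_tokens:str:1,content_tokens:str:1" maps onto the tab-separated columns of one data line:

```python
def parse_with_schema(schema, line):
    """Map each tab-separated column of `line` to the field name
    declared in the comma-separated `schema` (name:type:count entries)."""
    field_names = [item.split(":")[0] for item in schema.split(",")]
    columns = line.rstrip("\n").split("\t")
    return dict(zip(field_names, columns))

schema = "title_tokens:str:1,content_tokens:str:1"
row = parse_with_schema(schema, "湖北 复工率 达 93.8%\t央视网 消息 : 湖北省 复工 复产 取得 成效\n")
print(row["title_tokens"])  # 湖北 复工率 达 93.8%
```

Each schema entry thus names one column; the dataset class can then look up fields like first_sequence and label_name by name rather than by position.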
Below, we manually set some parameters for the experiment.
args.tables = "./cn_train.tsv,./cn_dev.tsv"
args.input_schema = "title_tokens:str:1,content_tokens:str:1"
args.first_sequence = "content_tokens"
args.second_sequence = "title_tokens"
args.label_name = "title_tokens"
args.learning_rate = 3e-5
args.epoch_num = 1
args.save_checkpoint_steps = 150
args.sequence_length = 512
args.micro_batch_size = 8
args.export_tf_checkpoint_type = "none"
args.app_name = "sequence_generation"

args.pretrained_model_name_or_path = user_defined_parameters.get('pretrain_model_name_or_path', None)
args.pretrained_model_name_or_path = get_pretrain_model_path(args.pretrained_model_name_or_path)

train_dataset = SequenceGenerationDataset(
    pretrained_model_name_or_path=args.pretrained_model_name_or_path,
    data_file=args.tables.split(",")[0],
    max_seq_length=args.sequence_length,
    input_schema=args.input_schema,
    first_sequence=args.first_sequence,
    second_sequence=args.second_sequence,
    user_defined_parameters=user_defined_parameters,
    is_training=True)

valid_dataset = SequenceGenerationDataset(
    pretrained_model_name_or_path=args.pretrained_model_name_or_path,
    data_file=args.tables.split(",")[-1],
    max_seq_length=args.sequence_length,
    input_schema=args.input_schema,
    first_sequence=args.first_sequence,
    second_sequence=args.second_sequence,
    user_defined_parameters=user_defined_parameters,
    is_training=False)
Trying downloading name_mapping.json
Success
`/root/.easynlp/modelzoo/alibaba-pai/mt5-title-generation-zh.tgz` already exists
Model Training
With the data prepared and the pre-trained weights available, we can start training. We use EasyNLP's SequenceGeneration class to build the model for training. Its parameters are:
- pretrained_model_name_or_path: name or path of the pre-trained model. Here we use the helper function get_pretrain_model_path to resolve the model name "mt5-title-generation-zh" and download the model automatically
- user_defined_parameters: user-defined parameters; pass in the user_defined_parameters dict we just parsed
Building and loading the model
model = SequenceGeneration(pretrained_model_name_or_path=args.pretrained_model_name_or_path,
                           user_defined_parameters=user_defined_parameters)
**language** parameter is not provided in user defined parameters, using zh as default.
Loaded weights of the model: [shared.weight, encoder.embed_tokens.weight, encoder.block.0.layer.0.SelfAttention.q.weight, ..., decoder.final_layer_norm.weight, lm_head.weight]. All weights are initialized.
Building the trainer and training
extra_para = {'pretrained_model_name_or_path': args.pretrained_model_name_or_path}
evaluator = SequenceGenerationEvaluator(valid_dataset=valid_dataset,
                                        user_defined_parameters=user_defined_parameters,
                                        **extra_para)
trainer = Trainer(model=model,
                  train_dataset=train_dataset,
                  user_defined_parameters=user_defined_parameters,
                  evaluator=evaluator)
trainer.train()
[2022-08-25 10:14:27,292 INFO] ========== Initializing Tensorboard ==========
[2022-08-25 10:14:27,326 INFO] ========== Training Start ==========
[2022-08-25 10:14:27,327 INFO] Num of GPUs (all) = 1
[2022-08-25 10:14:27,329 INFO] Num of CPUs per worker = 1
[2022-08-25 10:14:27,329 INFO] Num dataset examples = 1000
[2022-08-25 10:14:27,330 INFO] Num training examples = 1000
[2022-08-25 10:14:27,330 INFO] Num validation examples = 500
[2022-08-25 10:14:27,331 INFO] Train. batch size = 8
[2022-08-25 10:14:27,332 INFO] Train. micro batch size = 8
[2022-08-25 10:14:27,333 INFO] Train. batch no. = 125
[2022-08-25 10:14:27,334 INFO] Evaluation batch size = 8
[2022-08-25 10:14:27,335 INFO] Total training steps = 125
[2022-08-25 10:14:27,335 INFO] Sequence length = 512
[2022-08-25 10:14:27,336 INFO] Saving steps = 150
[2022-08-25 10:14:27,337 INFO] Distributed_backend = nccl
[2022-08-25 10:14:27,337 INFO] Worker Count = 1
[2022-08-25 10:14:27,338 INFO] Worker CPU = -1
[2022-08-25 10:14:27,338 INFO] Worker data threads = 10
[2022-08-25 10:14:27,342 INFO] num model params = 275,029,248
[2022-08-25 10:14:27,342 INFO] num trainable params = 275,029,248
[2022-08-25 10:14:27,343 INFO]
[2022-08-25 10:14:27,346 INFO] ========== Model Config ==========
[2022-08-25 10:14:27,346 INFO] {
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "d_ff": 2048,
  "d_kv": 64,
  "d_model": 768,
  "decoder_start_token_id": 0,
  "dropout_rate": 0.1,
  "easynlp_version": "0.0.3",
  "eos_token_id": 1,
  "feed_forward_proj": "gated-gelu",
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "layer_norm_epsilon": 1e-06,
  "model_type": "mt5",
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_num_buckets": 32,
  "tie_word_embeddings": false,
  "tokenizer_class": "T5Tokenizer",
  "use_cache": true,
  "vocab_size": 50000
}
optimizer type: AdamW
/home/pai/lib/python3.6/site-packages/pai_easynlp-0.0.7-py3.6.egg/easynlp/core/optimizers.py:441: UserWarning: This overload of add_ is deprecated:
	add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
	add_(Tensor other, *, Number alpha) (Triggered internally at /workspace/artifacts/paipytorch1.8/dist/ubuntu18.04-py3.6-cuda10.1/build/src/torch/csrc/utils/python_arg_parser.cpp:1005.)
  exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
/home/pai/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:247: UserWarning: To get the last learning rate computed by the scheduler, please use `get_last_lr()`.
  warnings.warn("To get the last learning rate computed by the scheduler, "
[2022-08-25 10:15:17,786 INFO] Epoch [ 0/ 1], step [100/125], lr 0.000007, 50.44 s
[2022-08-25 10:15:17,788 INFO] loss : 1.9847
Training Time: 62.316936016082764, rank 0, gsteps 125
100%|██████████| 500/500 [04:47<00:00, 1.74it/s]
[2022-08-25 10:20:17,142 INFO] Saving best model to ./finetuned_zh_model/pytorch_model.bin...
Rouge 1/2/L: 60.85/47.17/57.08
[2022-08-25 10:20:39,439 INFO] Best score: 57.080514058238066
[2022-08-25 10:20:39,440 INFO] Training Time: 372.4064085483551
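Before moving on, it can be useful to confirm that the checkpoint directory was actually written. The helper below is our own sketch; the only file name we rely on is pytorch_model.bin, which appears in the training log above.

```python
import os

def checkpoint_files(checkpoint_dir):
    """Return a sorted list of files in the checkpoint directory,
    or an empty list if the directory does not exist yet."""
    if not os.path.isdir(checkpoint_dir):
        return []
    return sorted(os.listdir(checkpoint_dir))

# After training, "./finetuned_zh_model" should contain at least
# pytorch_model.bin (as reported in the training log).
print(checkpoint_files("./finetuned_zh_model"))
```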
Model Evaluation
After training, the model is saved to the checkpoint_dir specified at the beginning, i.e. the local path "./finetuned_zh_model/". We can now evaluate the fine-tuned model: we initialize an evaluator with EasyNLP's SequenceGenerationEvaluator, move the model to the GPU, and run the evaluation.
args.tables = "cn_dev.tsv"
extra_para = {'pretrained_model_name_or_path': args.pretrained_model_name_or_path}
evaluator = SequenceGenerationEvaluator(valid_dataset=valid_dataset,
                                        user_defined_parameters=user_defined_parameters,
                                        **extra_para)

if args.n_gpu > 0:
    model.to(torch.cuda.current_device())
else:
    model.to("cpu")

evaluator.evaluate(model=model)
100%|██████████| 500/500 [04:55<00:00, 1.69it/s]
Rouge 1/2/L: 59.72/46.35/56.75
[('rouge-l', 56.74881028022306), ('rouge-1', 59.718829077366934), ('rouge-2', 46.34863665437947)]
Model Prediction
We can also use the fine-tuned model to generate news titles. We first create a predictor and use it to instantiate a PredictorManager. The predictions are written to cn.preds.txt.
args.tables = "cn_dev.tsv"
args.outputs = "cn.preds.txt"
args.input_schema = "title:str:1,content:str:1,title_tokens:str:1,content_tokens:str:1,tag:str:1"
args.output_schema = "predictions,beams"
args.append_cols = "title_tokens,content,tag"
args.micro_batch_size = 32

predictor = SequenceGenerationPredictor(model_dir=args.checkpoint_dir,
                                        model_cls=SequenceGeneration,
                                        first_sequence=args.first_sequence,
                                        user_defined_parameters=user_defined_parameters)
predictor_manager = PredictorManager(
    predictor=predictor,
    input_file=args.tables.split(",")[0],
    input_schema=args.input_schema,
    output_file=args.outputs,
    output_schema=args.output_schema,
    append_cols=args.append_cols,
    batch_size=args.micro_batch_size
)
predictor_manager.run()
**language** parameter is not provided in user defined parameters, using zh as default.
Loaded weights of the model: [shared.weight, encoder.embed_tokens.weight, ...]
er.block.9.layer.0.SelfAttention.k.weight,encoder.block.9.layer.0.SelfAttention.v.weight,encoder.block.9.layer.0.SelfAttention.o.weight,encoder.block.9.layer.0.layer_norm.weight,encoder.block.9.layer.1.DenseReluDense.wi_0.weight,encoder.block.9.layer.1.DenseReluDense.wi_1.weight,encoder.block.9.layer.1.DenseReluDense.wo.weight,encoder.block.9.layer.1.layer_norm.weight,encoder.block.10.layer.0.SelfAttention.q.weight,encoder.block.10.layer.0.SelfAttention.k.weight,encoder.block.10.layer.0.SelfAttention.v.weight,encoder.block.10.layer.0.SelfAttention.o.weight,encoder.block.10.layer.0.layer_norm.weight,encoder.block.10.layer.1.DenseReluDense.wi_0.weight,encoder.block.10.layer.1.DenseReluDense.wi_1.weight,encoder.block.10.layer.1.DenseReluDense.wo.weight,encoder.block.10.layer.1.layer_norm.weight,encoder.block.11.layer.0.SelfAttention.q.weight,encoder.block.11.layer.0.SelfAttention.k.weight,encoder.block.11.layer.0.SelfAttention.v.weight,encoder.block.11.layer.0.SelfAttention.o.weight,encoder.block.11.layer.0.layer_norm.weight,encoder.block.11.layer.1.DenseReluDense.wi_0.weight,encoder.block.11.layer.1.DenseReluDense.wi_1.weight,encoder.block.11.layer.1.DenseReluDense.wo.weight,encoder.block.11.layer.1.layer_norm.weight,encoder.final_layer_norm.weight,decoder.embed_tokens.weight,decoder.block.0.layer.0.SelfAttention.q.weight,decoder.block.0.layer.0.SelfAttention.k.weight,decoder.block.0.layer.0.SelfAttention.v.weight,decoder.block.0.layer.0.SelfAttention.o.weight,decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight,decoder.block.0.layer.0.layer_norm.weight,decoder.block.0.layer.1.EncDecAttention.q.weight,decoder.block.0.layer.1.EncDecAttention.k.weight,decoder.block.0.layer.1.EncDecAttention.v.weight,decoder.block.0.layer.1.EncDecAttention.o.weight,decoder.block.0.layer.1.layer_norm.weight,decoder.block.0.layer.2.DenseReluDense.wi_0.weight,decoder.block.0.layer.2.DenseReluDense.wi_1.weight,decoder.block.0.layer.2.DenseReluDense.wo.weight,decoder.block.0.l
ayer.2.layer_norm.weight,decoder.block.1.layer.0.SelfAttention.q.weight,decoder.block.1.layer.0.SelfAttention.k.weight,decoder.block.1.layer.0.SelfAttention.v.weight,decoder.block.1.layer.0.SelfAttention.o.weight,decoder.block.1.layer.0.layer_norm.weight,decoder.block.1.layer.1.EncDecAttention.q.weight,decoder.block.1.layer.1.EncDecAttention.k.weight,decoder.block.1.layer.1.EncDecAttention.v.weight,decoder.block.1.layer.1.EncDecAttention.o.weight,decoder.block.1.layer.1.layer_norm.weight,decoder.block.1.layer.2.DenseReluDense.wi_0.weight,decoder.block.1.layer.2.DenseReluDense.wi_1.weight,decoder.block.1.layer.2.DenseReluDense.wo.weight,decoder.block.1.layer.2.layer_norm.weight,decoder.block.2.layer.0.SelfAttention.q.weight,decoder.block.2.layer.0.SelfAttention.k.weight,decoder.block.2.layer.0.SelfAttention.v.weight,decoder.block.2.layer.0.SelfAttention.o.weight,decoder.block.2.layer.0.layer_norm.weight,decoder.block.2.layer.1.EncDecAttention.q.weight,decoder.block.2.layer.1.EncDecAttention.k.weight,decoder.block.2.layer.1.EncDecAttention.v.weight,decoder.block.2.layer.1.EncDecAttention.o.weight,decoder.block.2.layer.1.layer_norm.weight,decoder.block.2.layer.2.DenseReluDense.wi_0.weight,decoder.block.2.layer.2.DenseReluDense.wi_1.weight,decoder.block.2.layer.2.DenseReluDense.wo.weight,decoder.block.2.layer.2.layer_norm.weight,decoder.block.3.layer.0.SelfAttention.q.weight,decoder.block.3.layer.0.SelfAttention.k.weight,decoder.block.3.layer.0.SelfAttention.v.weight,decoder.block.3.layer.0.SelfAttention.o.weight,decoder.block.3.layer.0.layer_norm.weight,decoder.block.3.layer.1.EncDecAttention.q.weight,decoder.block.3.layer.1.EncDecAttention.k.weight,decoder.block.3.layer.1.EncDecAttention.v.weight,decoder.block.3.layer.1.EncDecAttention.o.weight,decoder.block.3.layer.1.layer_norm.weight,decoder.block.3.layer.2.DenseReluDense.wi_0.weight,decoder.block.3.layer.2.DenseReluDense.wi_1.weight,decoder.block.3.layer.2.DenseReluDense.wo.weight,decoder.block.3.layer.2.layer_norm
.weight,decoder.block.4.layer.0.SelfAttention.q.weight,decoder.block.4.layer.0.SelfAttention.k.weight,decoder.block.4.layer.0.SelfAttention.v.weight,decoder.block.4.layer.0.SelfAttention.o.weight,decoder.block.4.layer.0.layer_norm.weight,decoder.block.4.layer.1.EncDecAttention.q.weight,decoder.block.4.layer.1.EncDecAttention.k.weight,decoder.block.4.layer.1.EncDecAttention.v.weight,decoder.block.4.layer.1.EncDecAttention.o.weight,decoder.block.4.layer.1.layer_norm.weight,decoder.block.4.layer.2.DenseReluDense.wi_0.weight,decoder.block.4.layer.2.DenseReluDense.wi_1.weight,decoder.block.4.layer.2.DenseReluDense.wo.weight,decoder.block.4.layer.2.layer_norm.weight,decoder.block.5.layer.0.SelfAttention.q.weight,decoder.block.5.layer.0.SelfAttention.k.weight,decoder.block.5.layer.0.SelfAttention.v.weight,decoder.block.5.layer.0.SelfAttention.o.weight,decoder.block.5.layer.0.layer_norm.weight,decoder.block.5.layer.1.EncDecAttention.q.weight,decoder.block.5.layer.1.EncDecAttention.k.weight,decoder.block.5.layer.1.EncDecAttention.v.weight,decoder.block.5.layer.1.EncDecAttention.o.weight,decoder.block.5.layer.1.layer_norm.weight,decoder.block.5.layer.2.DenseReluDense.wi_0.weight,decoder.block.5.layer.2.DenseReluDense.wi_1.weight,decoder.block.5.layer.2.DenseReluDense.wo.weight,decoder.block.5.layer.2.layer_norm.weight,decoder.block.6.layer.0.SelfAttention.q.weight,decoder.block.6.layer.0.SelfAttention.k.weight,decoder.block.6.layer.0.SelfAttention.v.weight,decoder.block.6.layer.0.SelfAttention.o.weight,decoder.block.6.layer.0.layer_norm.weight,decoder.block.6.layer.1.EncDecAttention.q.weight,decoder.block.6.layer.1.EncDecAttention.k.weight,decoder.block.6.layer.1.EncDecAttention.v.weight,decoder.block.6.layer.1.EncDecAttention.o.weight,decoder.block.6.layer.1.layer_norm.weight,decoder.block.6.layer.2.DenseReluDense.wi_0.weight,decoder.block.6.layer.2.DenseReluDense.wi_1.weight,decoder.block.6.layer.2.DenseReluDense.wo.weight,decoder.block.6.layer.2.layer_norm.weight,decoder.b
lock.7.layer.0.SelfAttention.q.weight,decoder.block.7.layer.0.SelfAttention.k.weight,decoder.block.7.layer.0.SelfAttention.v.weight,decoder.block.7.layer.0.SelfAttention.o.weight,decoder.block.7.layer.0.layer_norm.weight,decoder.block.7.layer.1.EncDecAttention.q.weight,decoder.block.7.layer.1.EncDecAttention.k.weight,decoder.block.7.layer.1.EncDecAttention.v.weight,decoder.block.7.layer.1.EncDecAttention.o.weight,decoder.block.7.layer.1.layer_norm.weight,decoder.block.7.layer.2.DenseReluDense.wi_0.weight,decoder.block.7.layer.2.DenseReluDense.wi_1.weight,decoder.block.7.layer.2.DenseReluDense.wo.weight,decoder.block.7.layer.2.layer_norm.weight,decoder.block.8.layer.0.SelfAttention.q.weight,decoder.block.8.layer.0.SelfAttention.k.weight,decoder.block.8.layer.0.SelfAttention.v.weight,decoder.block.8.layer.0.SelfAttention.o.weight,decoder.block.8.layer.0.layer_norm.weight,decoder.block.8.layer.1.EncDecAttention.q.weight,decoder.block.8.layer.1.EncDecAttention.k.weight,decoder.block.8.layer.1.EncDecAttention.v.weight,decoder.block.8.layer.1.EncDecAttention.o.weight,decoder.block.8.layer.1.layer_norm.weight,decoder.block.8.layer.2.DenseReluDense.wi_0.weight,decoder.block.8.layer.2.DenseReluDense.wi_1.weight,decoder.block.8.layer.2.DenseReluDense.wo.weight,decoder.block.8.layer.2.layer_norm.weight,decoder.block.9.layer.0.SelfAttention.q.weight,decoder.block.9.layer.0.SelfAttention.k.weight,decoder.block.9.layer.0.SelfAttention.v.weight,decoder.block.9.layer.0.SelfAttention.o.weight,decoder.block.9.layer.0.layer_norm.weight,decoder.block.9.layer.1.EncDecAttention.q.weight,decoder.block.9.layer.1.EncDecAttention.k.weight,decoder.block.9.layer.1.EncDecAttention.v.weight,decoder.block.9.layer.1.EncDecAttention.o.weight,decoder.block.9.layer.1.layer_norm.weight,decoder.block.9.layer.2.DenseReluDense.wi_0.weight,decoder.block.9.layer.2.DenseReluDense.wi_1.weight,decoder.block.9.layer.2.DenseReluDense.wo.weight,decoder.block.9.layer.2.layer_norm.weight,decoder.block.10.layer.0.S
elfAttention.q.weight,decoder.block.10.layer.0.SelfAttention.k.weight,decoder.block.10.layer.0.SelfAttention.v.weight,decoder.block.10.layer.0.SelfAttention.o.weight,decoder.block.10.layer.0.layer_norm.weight,decoder.block.10.layer.1.EncDecAttention.q.weight,decoder.block.10.layer.1.EncDecAttention.k.weight,decoder.block.10.layer.1.EncDecAttention.v.weight,decoder.block.10.layer.1.EncDecAttention.o.weight,decoder.block.10.layer.1.layer_norm.weight,decoder.block.10.layer.2.DenseReluDense.wi_0.weight,decoder.block.10.layer.2.DenseReluDense.wi_1.weight,decoder.block.10.layer.2.DenseReluDense.wo.weight,decoder.block.10.layer.2.layer_norm.weight,decoder.block.11.layer.0.SelfAttention.q.weight,decoder.block.11.layer.0.SelfAttention.k.weight,decoder.block.11.layer.0.SelfAttention.v.weight,decoder.block.11.layer.0.SelfAttention.o.weight,decoder.block.11.layer.0.layer_norm.weight,decoder.block.11.layer.1.EncDecAttention.q.weight,decoder.block.11.layer.1.EncDecAttention.k.weight,decoder.block.11.layer.1.EncDecAttention.v.weight,decoder.block.11.layer.1.EncDecAttention.o.weight,decoder.block.11.layer.1.layer_norm.weight,decoder.block.11.layer.2.DenseReluDense.wi_0.weight,decoder.block.11.layer.2.DenseReluDense.wi_1.weight,decoder.block.11.layer.2.DenseReluDense.wo.weight,decoder.block.11.layer.2.layer_norm.weight,decoder.final_layer_norm.weight,lm_head.weight]. All weights are initialized. [2022-08-25 10:27:07,732 INFO] Using SimplePredict to predict... 16it [04:03, 15.23s/it]
print('Labeled samples:')
! tail -n 1 cn_dev.tsv
print('Predicted results:')
! tail -n 1 cn.preds.txt
Labeled samples: 2016年食药监部门将重点从6方面开展案件查处 新华社北京2月12日电(记者徐庆松)国家食品药品监督管理总局近日发布消息称,2016年各级食品药品监管部门在案件查处方面将重点做好以下工作。要继续以查处重大案件为核心,针对与人民群众日常生活关系密切、问题突出的重点领域和重点产品,拓宽案件来源渠道,深入开展专项执法行动,严厉打击危害食品药品安全的行业“潜规则”。进一步完善行政执法与日常监管的衔接机制,坚持处罚打击和监督整改并举,形成稽查办案与日常监管合力,发挥最大效能。与司法机关密切配合,建立健全联席会议、重大案件联合督办、案件信息联合发布等机制,细化案件线索通报、案件移送、协助检验认定等程序和内容,增强联合打击食品违法犯罪合力。通过督查指导、考核激励等多种措施强力推进案件信息公开,不断健全公开机制、规范公开内容、丰富公开形式,有力震慑违法行为,保护消费合法权益。加强稽查执法业务培训和办案实战经验交流,继续开展优秀案例评选示范工作,不断提高全系统稽查执法人员的综合素质和专业技能。加强稽查办案信息化建设,实现稽查办案信息的共享、共用和快速传输,缩短办案时间,提升执法效率和水平。据介绍,当前食品违法犯罪形势依然严峻,制假售假“黑窝点”屡打不绝;有的持证企业利欲熏心、铤而走险,违法添加、滥用食品添加剂等问题屡禁不止。随着互联网的普及和造假手段的更新升级,一些重大案件呈现跨区域、链条化、网络化等新特点,违法性质恶劣,违法手段隐蔽,社会危害严重。同时,各地稽查队伍和能力建设仍存在差异,执法装备不足、检验检测手段落后等问题还比较突出。(完) 2016 年 食药监 部门 将 重点 从 6 方面 开展 案件 查处 新华社 北京 2 月 12 日电 ( 记者 徐庆松 ) 国家 食品药品 监督管理 总局 近日 发布 消息 称 , 2016 年 各级 食品药品 监管部门 在 案件 查处 方面 将 重点 做好 以下 工作 。 要 继续 以 查处 重大案件 为 核心 , 针对 与 人民 群众 日常生活 关系密切 、 问题 突出 的 重点 领域 和 重点 产品 , 拓宽 案件 来源 渠道 , 深入开展 专项 执法 行动 , 严厉打击 危害 食品药品 安全 的 行业 “ 潜规则 ” 。 进一步 完善 行政 执法 与 日常 监管 的 衔接 机制 , 坚持 处罚 打击 和 监督 整改 并举 , 形成 稽查 办案 与 日常 监管 合力 , 发挥 最大 效能 。 与 司法机关 密切配合 , 建立健全 联席会议 、 重大案件 联合 督办 、 案件 信息 联合 发布 等 机制 , 细化 案件线索 通报 、 案件 移送 、 协助 检验 认定 等 程序 和 内容 , 增强 联合 打击 食品 违法犯罪 合力 。 通过 督查 指导 、 考核 激励 等 多种 措施 强力 推进 案件 信息 公开 , 不断 健全 公开 机制 、 规范 公开 内容 、 丰富 公开 形式 , 有力 震慑 违法行为 , 保护 消费 合法权益 。 加强 稽查 执法 业务培训 和 办案 实战经验 交流 , 继续 开展 优秀 案例 评选 示范 工作 , 不断 提高 全 系统 稽查 执法人员 的 综合 素质 和 专业技能 。 加强 稽查 办案 信息化 建设 , 实现 稽查 办案 信息 的 共享 、 共用 和 快速 传输 , 缩短 办案 时间 , 提升 执法 效率 和 水平 。 据介绍 , 当前 食品 违法犯罪 形势 依然 严峻 , 制假 售假 “ 黑窝点 ” 屡 打 不绝 ; 有 的 持证 企业 利欲熏心 、 铤而走险 , 违法 添加 、 滥用 食品 添加剂 等 问题 屡禁不止 。 随着 互联网 的 普及 和 造假 手段 的 更新 升级 , 一些 重大案件 呈现 跨 区域 、 链条 化 、 网络化 等 新 特点 , 违法 性质 恶劣 , 违法 手段 隐蔽 , 社会 危害 严重 。 同时 , 各地 稽查 队伍 和 能力 建设 仍 存在 差异 , 执法 装备 不足 、 检验 检测 手段 落后 等 问题 还 比较突出 。 ( 完 ) 经济 Predicted results: 食 品 药 品 监督管理 总局 : 严厉打击 危害 食品 安 全 隐 患 的 行业 “ 潜规则 ” 食 品 药 品 监督管理 总局 : 严厉打击 危害 食品 安 全 隐 患 的 行业 “ 潜规则 ”||食 品 药 品 监督管理 总局 : 严厉打击 危害 食品 安 全 事 故 的 行业 “ 潜规则 ”||食 品 药 品 监督管理 总局 : 严厉打击 危害 食品 安全 的 行业 “ 潜规则 ”||食 品 药 品 监督管理 总局 : 严厉打击 危害 食品 安 全 事 故 行业 “ 潜规则 ”||食 品 药 品 监督管理 
总局 : 严厉打击 危害 食品 安 全 事 故 的 “ 潜规则 ” 2016 年 食药监 部门 将 重点 从 6 方面 开展 案件 查处 新华社北京2月12日电(记者徐庆松)国家食品药品监督管理总局近日发布消息称,2016年各级食品药品监管部门在案件查处方面将重点做好以下工作。要继续以查处重大案件为核心,针对与人民群众日常生活关系密切、问题突出的重点领域和重点产品,拓宽案件来源渠道,深入开展专项执法行动,严厉打击危害食品药品安全的行业“潜规则”。进一步完善行政执法与日常监管的衔接机制,坚持处罚打击和监督整改并举,形成稽查办案与日常监管合力,发挥最大效能。与司法机关密切配合,建立健全联席会议、重大案件联合督办、案件信息联合发布等机制,细化案件线索通报、案件移送、协助检验认定等程序和内容,增强联合打击食品违法犯罪合力。通过督查指导、考核激励等多种措施强力推进案件信息公开,不断健全公开机制、规范公开内容、丰富公开形式,有力震慑违法行为,保护消费合法权益。加强稽查执法业务培训和办案实战经验交流,继续开展优秀案例评选示范工作,不断提高全系统稽查执法人员的综合素质和专业技能。加强稽查办案信息化建设,实现稽查办案信息的共享、共用和快速传输,缩短办案时间,提升执法效率和水平。据介绍,当前食品违法犯罪形势依然严峻,制假售假“黑窝点”屡打不绝;有的持证企业利欲熏心、铤而走险,违法添加、滥用食品添加剂等问题屡禁不止。随着互联网的普及和造假手段的更新升级,一些重大案件呈现跨区域、链条化、网络化等新特点,违法性质恶劣,违法手段隐蔽,社会危害严重。同时,各地稽查队伍和能力建设仍存在差异,执法装备不足、检验检测手段落后等问题还比较突出。(完) 经济
The above shows one sample from the dataset together with the trained model's prediction. The first column is the predicted title, and the second column contains the 5 beam-search results, separated by ||. The output also includes the original news title, the news body, and the news category, with columns separated by \t.
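Based on the column layout described above, one line of cn.preds.txt can be split into its components as follows (a minimal sketch; it assumes the \t-separated order described above and may need adjusting for your actual output file):

```python
def parse_prediction_line(line):
    """Split one prediction line into its components.

    Assumed tab-separated column order: predicted title,
    beam-search candidates joined by '||', then the appended
    columns (original title tokens, news body, category).
    """
    fields = line.rstrip("\n").split("\t")
    prediction = fields[0]
    beams = fields[1].split("||")
    appended = fields[2:]  # title_tokens, content, tag
    return prediction, beams, appended

# Example with a toy line in the same layout
sample = "标题A\t标题A||标题B\t原标题\t正文\t经济"
pred, beams, extra = parse_prediction_line(sample)
print(pred)        # 标题A
print(len(beams))  # 2
```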
One-Step Execution
Note that all of the training/evaluation/prediction code above has been integrated into EasyNLP/examples/appzoo_tutorials/sequence_generation/main.py, and we also provide several ready-to-run scripts. You can either run main.py with the appropriate arguments, or directly execute the script run_user_defined_local_zh.sh, to perform all of the training/evaluation/prediction steps above in one step.
One-step execution with main.py
You can run main.py with the arguments below to train, evaluate, or predict with the model directly. The parameters are explained above and are not repeated here.
The model training command is as follows:
! python main.py \
  --mode train \
  --app_name=sequence_generation \
  --worker_gpu=1 \
  --tables=./cn_train.tsv,./cn_dev.tsv \
  --input_schema=title_tokens:str:1,content_tokens:str:1 \
  --first_sequence=content_tokens \
  --second_sequence=title_tokens \
  --label_name=title_tokens \
  --checkpoint_dir=./finetuned_zh_model \
  --micro_batch_size=8 \
  --sequence_length=512 \
  --epoch_num=1 \
  --save_checkpoint_steps=150 \
  --export_tf_checkpoint_type none \
  --user_defined_parameters 'pretrain_model_name_or_path=alibaba-pai/mt5-title-generation-zh language=zh copy=false max_encoder_length=512 min_decoder_length=12 max_decoder_length=32 no_repeat_ngram_size=2 num_beams=5 num_return_sequences=5'
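The `--user_defined_parameters` option above is a single string of space-separated `key=value` pairs. Conceptually it is parsed into a dictionary along these lines (a simplified sketch for illustration; EasyNLP's actual parsing code may differ):

```python
def parse_user_defined_parameters(s):
    """Turn 'k1=v1 k2=v2 ...' into a dict of string values."""
    # Split on the first '=' only, so values may themselves contain '='
    return dict(pair.split("=", 1) for pair in s.split())

params = parse_user_defined_parameters(
    "pretrain_model_name_or_path=alibaba-pai/mt5-title-generation-zh "
    "language=zh copy=false num_beams=5"
)
print(params["num_beams"])  # 5
```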
The model evaluation command is as follows:
! python main.py \
  --mode=evaluate \
  --app_name=sequence_generation \
  --worker_gpu=1 \
  --tables=./cn_dev.tsv \
  --input_schema=title_tokens:str:1,content_tokens:str:1 \
  --first_sequence=content_tokens \
  --second_sequence=title_tokens \
  --label_name=title_tokens \
  --checkpoint_dir=./finetuned_zh_model \
  --micro_batch_size=8 \
  --sequence_length=512 \
  --epoch_num=1 \
  --save_checkpoint_steps=150 \
  --export_tf_checkpoint_type none \
  --user_defined_parameters 'language=zh copy=false max_encoder_length=512 min_decoder_length=12 max_decoder_length=32 no_repeat_ngram_size=2 num_beams=5 num_return_sequences=5'
The model prediction command is as follows:
! python main.py \
  --mode=predict \
  --app_name=sequence_generation \
  --worker_gpu=1 \
  --tables=./cn_dev.tsv \
  --outputs=./cn.preds.txt \
  --input_schema=title:str:1,content:str:1,title_tokens:str:1,content_tokens:str:1,tag:str:1 \
  --output_schema=predictions,beams \
  --append_cols=title_tokens,content,tag \
  --first_sequence=content_tokens \
  --checkpoint_dir=./finetuned_zh_model \
  --micro_batch_size=32 \
  --sequence_length=512 \
  --user_defined_parameters 'language=zh copy=false max_encoder_length=512 min_decoder_length=12 max_decoder_length=32 no_repeat_ngram_size=2 num_beams=5 num_return_sequences=5'
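The `--input_schema` value used above is a comma-separated list of `name:type:length` column descriptors. The following sketch shows how such a schema string maps to column definitions (illustrative only, not EasyNLP's internal code):

```python
def parse_input_schema(schema):
    """Parse 'name:type:len,...' into a list of (name, type, length)."""
    cols = []
    for field in schema.split(","):
        name, dtype, length = field.split(":")
        cols.append((name, dtype, int(length)))
    return cols

cols = parse_input_schema("title_tokens:str:1,content_tokens:str:1")
print([c[0] for c in cols])  # ['title_tokens', 'content_tokens']
```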
One-step execution via the bash script
We provide several ready-to-run scripts under the EasyNLP/examples/appzoo_tutorials/sequence_generation folder, so you can likewise complete model training/evaluation/prediction in one step by running a script with arguments. Take run_user_defined_local_zh.sh as an example: the script takes two arguments, the first being the index of the GPU to run on (usually 0), and the second being the mode: train, evaluate, or predict.
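In essence, such a wrapper script pins the GPU and forwards the chosen mode to main.py. Below is a hypothetical Python equivalent of that dispatch logic (the real contents of run_user_defined_local_zh.sh set many more flags and may differ):

```python
def build_command(gpu_id, mode):
    """Construct a main.py invocation for the given GPU and mode.

    Hypothetical helper for illustration: the actual shell script
    passes the full set of training/evaluation/prediction flags.
    """
    if mode not in ("train", "evaluate", "predict"):
        raise ValueError(f"unknown mode: {mode}")
    env = {"CUDA_VISIBLE_DEVICES": str(gpu_id)}  # pin the GPU
    cmd = ["python", "main.py", f"--mode={mode}",
           "--app_name=sequence_generation"]
    return env, cmd

env, cmd = build_command(0, "train")
print(" ".join(cmd))  # python main.py --mode=train --app_name=sequence_generation
```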
Model training:
! bash examples/appzoo_tutorials/sequence_generation/run_user_defined_local_zh.sh 0 train
Model evaluation:
! bash examples/appzoo_tutorials/sequence_generation/run_user_defined_local_zh.sh 0 evaluate
Model prediction:
! bash examples/appzoo_tutorials/sequence_generation/run_user_defined_local_zh.sh 0 predict