[English Text Classification in Practice] Part 6: Models, Training, Evaluation, and Testing


· For the table of contents of this series, see: [English Text Classification in Practice] Part 1: Project Overview

· Download this project's resources: 神经网络实现英文文本分类.zip (PyTorch)

[1] Writing the Models


   1. TextRNN

  Following the TextRNN model proposed in the paper "Recurrent Neural Network for Text Classification with Multi-Task Learning", we implement it as follows:

import torch
import torch.nn as nn
import numpy as np

class Config(object):
    """Configuration parameters"""
    def __init__(self, dataset, embedding):
        self.model_name = 'TextRNN'
        self.train_path = dataset + '/data/train.csv'                                # training set
        self.dev_path = dataset + '/data/dev.csv'                                    # validation set
        self.test_path = dataset + '/data/test.csv'                                  # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt', encoding='utf-8').readlines()]              # class names
        self.vocab_path = dataset + '/data/vocab.pkl'                                # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'        # trained model weights
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32'))\
            if embedding != 'random' else None                                       # pretrained word vectors
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')   # device
        self.dropout = 0.5                                              # dropout; has no effect inside nn.LSTM when num_layers=1
        self.require_improvement = 1000                                 # stop early if no improvement after 1000 batches
        self.num_classes = len(self.class_list)                         # number of classes
        self.n_vocab = 0                                                # vocabulary size, assigned at runtime
        self.num_epochs = 10                                            # number of epochs
        self.batch_size = 128                                           # mini-batch size
        self.pad_size = 14                                              # every sentence is padded/truncated to this length
        self.learning_rate = 1e-3                                       # learning rate
        self.embed = self.embedding_pretrained.size(1)\
            if self.embedding_pretrained is not None else 300           # word embedding dimension; follows the pretrained vectors if used
        self.hidden_size = 128                                          # LSTM hidden size
        self.num_layers = 2                                             # number of LSTM layers
'''Recurrent Neural Network for Text Classification with Multi-Task Learning'''
'''
    shapes (illustrated with batch_size = 128, seq_len = 32, embed = 300):
    1. embedding output: [batch_size, seq_len, embed] = [128, 32, 300].
    2. lstm output: [batch_size, seq_len, hidden_size * 2] = [128, 32, 256]; the 32 here should no longer be read as 32 words of one sentence but as 32 time steps of the LSTM.
    3. out[:, -1, :] output: [batch_size, hidden_size * 2] = [128, 256]; takes the hidden state of the last time step.
    other:
    1. the number of LSTM layers does not affect the output shape.
    2. a bidirectional LSTM doubles the output feature dimension, i.e. hidden_size * 2.
'''
class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.lstm = nn.LSTM(config.embed, config.hidden_size, config.num_layers,
                            bidirectional=True, batch_first=True, dropout=config.dropout)
        self.fc = nn.Linear(config.hidden_size * 2, config.num_classes)
    def forward(self, x):
        x, _ = x  # the input is a (token ids, sequence lengths) tuple; lengths are unused here
        out = self.embedding(x)  # [batch_size, seq_len, embed] = [128, 32, 300]
        out, _ = self.lstm(out) # [batch_size, seq_len, hidden_size * 2]=[128, 32, 256]
        out = self.fc(out[:, -1, :]) # [batch_size, hidden_size * 2] = [128, 256]
        return out
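
  To sanity-check these shapes, a minimal smoke test can be run; the DummyConfig below is purely illustrative (it only sets the fields Model actually reads, and its values are assumptions, not part of the project):

import torch

class DummyConfig:  # hypothetical stand-in for Config, for shape checking only
    embedding_pretrained = None
    n_vocab = 5000
    embed = 300
    hidden_size = 128
    num_layers = 2
    dropout = 0.5
    num_classes = 6  # assumed number of classes

model = Model(DummyConfig())
tokens = torch.randint(0, 5000, (128, 32))   # [batch_size, seq_len]
lengths = torch.full((128,), 32)             # sequence lengths (unused by TextRNN)
logits = model((tokens, lengths))            # forward expects an (ids, lengths) tuple
print(logits.shape)                          # torch.Size([128, 6])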

   2. DPCNN

  Following the DPCNN model proposed in the paper "Deep Pyramid Convolutional Neural Networks for Text Categorization", we implement it as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Config(object):
    """Configuration parameters"""
    def __init__(self, dataset, embedding):
        self.model_name = 'DPCNN'
        self.train_path = dataset + '/data/train.csv'                                # training set
        self.dev_path = dataset + '/data/dev.csv'                                    # validation set
        self.test_path = dataset + '/data/test.csv'                                  # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt', encoding='utf-8').readlines()]              # class names
        self.vocab_path = dataset + '/data/vocab.pkl'                                # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'        # trained model weights
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32'))\
            if embedding != 'random' else None                                       # pretrained word vectors
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')   # device
        self.dropout = 0.5                                              # dropout rate
        self.require_improvement = 1000                                 # stop early if no improvement after 1000 batches
        self.num_classes = len(self.class_list)                         # number of classes
        self.n_vocab = 0                                                # vocabulary size, assigned at runtime
        self.num_epochs = 20                                            # number of epochs
        self.batch_size = 128                                           # mini-batch size
        self.pad_size = 14                                              # every sentence is padded/truncated to this length
        self.learning_rate = 1e-3                                       # learning rate
        self.embed = self.embedding_pretrained.size(1)\
            if self.embedding_pretrained is not None else 300           # word embedding dimension
        self.num_filters = 250                                          # number of convolution kernels (channels)
'''Deep Pyramid Convolutional Neural Networks for Text Categorization'''
class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.conv_region = nn.Conv2d(1, config.num_filters, (3, config.embed), stride=1)
        self.conv = nn.Conv2d(config.num_filters, config.num_filters, (3, 1), stride=1)
        self.max_pool = nn.MaxPool2d(kernel_size=(3, 1), stride=2)
        self.padding1 = nn.ZeroPad2d((0, 0, 1, 1))  # top bottom
        self.padding2 = nn.ZeroPad2d((0, 0, 0, 1))  # bottom
        self.relu = nn.ReLU()
        self.fc = nn.Linear(config.num_filters, config.num_classes)
    def forward(self, x):
        x = x[0]  # take the token ids from the (ids, lengths) tuple
        x = self.embedding(x)  # [batch_size, seq_len, embed]
        x = x.unsqueeze(1)  # [batch_size, 1, seq_len, embed]
        # region embedding: a 3-gram convolution spanning the full embedding width
        x = self.conv_region(x)  # [batch_size, 250, seq_len-3+1, 1]
        x = self.padding1(x)  # [batch_size, 250, seq_len, 1]
        x = self.relu(x)
        x = self.conv(x)  # [batch_size, 250, seq_len-3+1, 1]
        x = self.padding1(x)  # [batch_size, 250, seq_len, 1]
        x = self.relu(x)
        x = self.conv(x)  # [batch_size, 250, seq_len-3+1, 1]
        while x.size()[2] > 2:
            x = self._block(x)
        x = x.squeeze()  # [batch_size, num_filters(250)]; note squeeze() would also drop a batch dimension of 1
        x = self.fc(x)
        return x
    def _block(self, x):
        x = self.padding2(x)
        px = self.max_pool(x)
        x = self.padding1(px)
        x = F.relu(x)
        x = self.conv(x)
        x = self.padding1(x)
        x = F.relu(x)
        x = self.conv(x)
        # shortcut (residual) connection
        x = x + px
        return x
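
  The pyramid behaviour comes from _block: padding2 plus the stride-2 max pool roughly halves the sequence dimension each pass, while the two re-padded convolutions leave it unchanged, so the residual addition x + px lines up. The arithmetic below is a pure illustration mirroring the layer definitions above, traced for pad_size = 14:

def block_length(n):
    n = n + 1                  # padding2 adds one row at the bottom
    return (n - 3) // 2 + 1    # MaxPool2d(kernel_size=(3, 1), stride=2)

n = 14 - 3 + 1                 # length entering the while loop: 12
while n > 2:
    print(n, '->', block_length(n))
    n = block_length(n)
# prints 12 -> 6, 6 -> 3, 3 -> 1, i.e. three pyramid levels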

   3. TextCNN

  Following the TextCNN model proposed in the paper "Convolutional Neural Networks for Sentence Classification", we implement it as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Config(object):
    """Configuration parameters"""
    def __init__(self, dataset, embedding):
        self.model_name = 'TextCNN'
        self.train_path = dataset + '/data/train.csv'                                # training set
        self.dev_path = dataset + '/data/dev.csv'                                    # validation set
        self.test_path = dataset + '/data/test.csv'                                  # test set
        self.class_list = [x.strip() for x in open(
            dataset + '/data/class.txt', encoding='utf-8').readlines()]              # class names
        self.vocab_path = dataset + '/data/vocab.pkl'                                # vocabulary
        self.save_path = dataset + '/saved_dict/' + self.model_name + '.ckpt'        # trained model weights
        self.log_path = dataset + '/log/' + self.model_name
        self.embedding_pretrained = torch.tensor(
            np.load(dataset + '/data/' + embedding)["embeddings"].astype('float32'))\
            if embedding != 'random' else None                                       # pretrained word vectors
        self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')   # device
        self.dropout = 0.5                                              # dropout rate
        self.require_improvement = 1000                                 # stop early if no improvement after 1000 batches
        self.num_classes = len(self.class_list)                         # number of classes
        self.n_vocab = 0                                                # vocabulary size, assigned at runtime
        self.num_epochs = 20                                            # number of epochs
        self.batch_size = 128                                           # mini-batch size
        self.pad_size = 14                                              # every sentence is padded/truncated to this length
        self.learning_rate = 1e-3                                       # learning rate
        self.embed = self.embedding_pretrained.size(1)\
            if self.embedding_pretrained is not None else 300           # word embedding dimension
        self.filter_sizes = (2, 3, 4)                                   # convolution kernel sizes
        self.num_filters = 256                                          # number of kernels (channels)
'''Convolutional Neural Networks for Sentence Classification'''
class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        if config.embedding_pretrained is not None:
            self.embedding = nn.Embedding.from_pretrained(config.embedding_pretrained, freeze=False)
        else:
            self.embedding = nn.Embedding(config.n_vocab, config.embed, padding_idx=config.n_vocab - 1)
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, config.num_filters, (k, config.embed)) for k in config.filter_sizes])
        self.dropout = nn.Dropout(config.dropout)
        self.fc = nn.Linear(config.num_filters * len(config.filter_sizes), config.num_classes)
    def conv_and_pool(self, x, conv):
        x = F.relu(conv(x)).squeeze(3)
        x = F.max_pool1d(x, x.size(2)).squeeze(2)
        return x
    def forward(self, x):
        out = self.embedding(x[0])  # [batch_size, seq_len, embed]
        out = out.unsqueeze(1)  # [batch_size, 1, seq_len, embed]
        out = torch.cat([self.conv_and_pool(out, conv) for conv in self.convs], 1)
        out = self.dropout(out)
        out = self.fc(out)
        return out
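
  Each branch convolves over the full embedding width, so a kernel of height k produces [batch_size, num_filters, seq_len-k+1, 1]; after squeeze(3) and max-pooling over time, each branch contributes num_filters features, and concatenation yields num_filters * len(filter_sizes) = 256 * 3 = 768 inputs to the classifier. A standalone shape check of one branch, with illustrative values only:

import torch
import torch.nn.functional as F

x = torch.randn(128, 1, 14, 300)            # [batch_size, 1, pad_size, embed]
conv = torch.nn.Conv2d(1, 256, (3, 300))    # the k = 3 branch of the ModuleList
h = F.relu(conv(x)).squeeze(3)              # [128, 256, 12]
h = F.max_pool1d(h, h.size(2)).squeeze(2)   # [128, 256]
print(h.shape)                              # torch.Size([128, 256])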

  The models above are reproduced from their papers. Their Config classes are almost identical; the parameters are listed below, followed by a short usage sketch:

  · model_name: the model's name; when training you must pass --model model_name;

  · train_path / dev_path / test_path: paths to the training, validation, and test sets;

  · class_list: read from the class txt file, mainly to determine how many labels there are;

  · vocab_path: path to the vocabulary file;

  · save_path: where the trained model weights are saved;

  · embedding_pretrained: the pretrained word vectors; if --embedding random is passed, no pretrained vectors are loaded and the embeddings are randomly initialized, then updated by backpropagation during training;

  · device: whether to run on GPU or CPU;

  · dropout: dropout rate; it can be applied to many layers;

  · require_improvement: if the validation loss has not improved for more than 1000 batches, training stops early;

  · num_classes: the number of classes;

  · num_epochs: the number of training epochs;

  · batch_size: the number of texts in one batch;

  · pad_size: the length each sentence is processed to (shorter sentences padded, longer ones truncated);

  · learning_rate: the learning rate.
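
  A minimal usage sketch, assuming the data/ and saved_dict/ directory layout above ('my_dataset' is a placeholder directory name):

config = Config('my_dataset', 'random')   # random initialization, no pretrained vectors
print(config.device, config.num_classes)
# n_vocab is the one field filled in later, after the vocabulary is built elsewhere:
# config.n_vocab = len(vocab)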

[2] Training, Validation, and Testing Code


   Training:

import time
import numpy as np
import torch
import torch.nn.functional as F
from sklearn import metrics
from torch.utils.tensorboard import SummaryWriter  # the project may use tensorboardX instead
from utils import get_time_dif  # timing helper defined in the project's utils module

def train(config, model, train_iter, dev_iter, test_iter):
    start_time = time.time()
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    # exponential learning-rate decay, per epoch: lr = gamma * lr
    # scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    total_batch = 0  # number of batches processed so far
    dev_best_loss = float('inf')
    last_improve = 0  # batch index of the last validation-loss improvement
    flag = False  # whether there has been no improvement for a long time
    writer = SummaryWriter(log_dir=config.log_path + '/' + time.strftime('%m-%d_%H.%M', time.localtime()))
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        # scheduler.step()  # learning-rate decay
        for i, (trains, labels) in enumerate(train_iter):
            outputs = model(trains)
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                # report performance on the training and validation sets every 100 batches
                true = labels.data.cpu()
                predic = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predic)
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
                if dev_loss < dev_best_loss:
                    dev_best_loss = dev_loss
                    torch.save(model.state_dict(), config.save_path)
                    improve = '*'
                    last_improve = total_batch
                else:
                    improve = ''
                time_dif = get_time_dif(start_time)
                msg = 'Iter: {0:>6},  Train Loss: {1:>5.2},  Train Acc: {2:>6.2%},  Val Loss: {3:>5.2},  Val Acc: {4:>6.2%},  Time: {5} {6}'
                print(msg.format(total_batch, loss.item(), train_acc, dev_loss, dev_acc, time_dif, improve))
                writer.add_scalar("loss/train", loss.item(), total_batch)
                writer.add_scalar("loss/dev", dev_loss, total_batch)
                writer.add_scalar("acc/train", train_acc, total_batch)
                writer.add_scalar("acc/dev", dev_acc, total_batch)
                model.train()
            total_batch += 1
            if total_batch - last_improve > config.require_improvement:
                # validation loss has not dropped for over 1000 batches; stop training
                print("No optimization for a long time, auto-stopping...")
                flag = True
                break
        if flag:
            break
    writer.close()
    test(config, model, test_iter)
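
  For context, here is a sketch of how run.py typically wires these pieces together; build_dataset and build_iterator are assumed helpers from the project's utils module (the names here are assumptions and may not match the project exactly):

from utils import build_dataset, build_iterator, get_time_dif  # assumed helper names

config = Config('my_dataset', 'random')               # placeholder dataset directory
vocab, train_data, dev_data, test_data = build_dataset(config)
config.n_vocab = len(vocab)                           # vocabulary size known only now
train_iter = build_iterator(train_data, config)
dev_iter = build_iterator(dev_data, config)
test_iter = build_iterator(test_data, config)

model = Model(config).to(config.device)
train(config, model, train_iter, dev_iter, test_iter)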

   Evaluation:

def evaluate(config, model, data_iter, test=False):
    model.eval()
    loss_total = 0
    predict_all = np.array([], dtype=int)
    labels_all = np.array([], dtype=int)
    with torch.no_grad():
        for texts, labels in data_iter:
            outputs = model(texts)
            loss = F.cross_entropy(outputs, labels)
            loss_total += loss.item()  # accumulate as a Python float
            labels = labels.data.cpu().numpy()
            predic = torch.max(outputs.data, 1)[1].cpu().numpy()
            labels_all = np.append(labels_all, labels)
            predict_all = np.append(predict_all, predic)
    acc = metrics.accuracy_score(labels_all, predict_all)
    if test:
        report = metrics.classification_report(labels_all, predict_all, target_names=config.class_list, digits=4)
        confusion = metrics.confusion_matrix(labels_all, predict_all)
        return acc, loss_total / len(data_iter), report, confusion
    return acc, loss_total / len(data_iter)

   Testing:

def test(config, model, test_iter):
    # test
    model.load_state_dict(torch.load(config.save_path))
    model.eval()
    start_time = time.time()
    test_acc, test_loss, test_report, test_confusion = evaluate(config, model, test_iter, test=True)
    msg = 'Test Loss: {0:>5.2},  Test Acc: {1:>6.2%}'
    print(msg.format(test_loss, test_acc))
    print("Precision, Recall and F1-Score...")
    print(test_report)
    print("Confusion Matrix...")
    print(test_confusion)
    time_dif = get_time_dif(start_time)
    print("Time usage:", time_dif)

   Sample output (a report is printed every 100 batches):

Epoch [1/10]
Iter:      0,  Train Loss:   2.1,  Train Acc: 12.50%,  Val Loss:   2.1,  Val Acc: 15.15%,  Time: 0:00:04 *
Iter:    100,  Train Loss:   0.9,  Train Acc: 69.53%,  Val Loss:  0.99,  Val Acc: 65.16%,  Time: 0:00:06 *
Iter:    200,  Train Loss:   0.9,  Train Acc: 68.75%,  Val Loss:  0.86,  Val Acc: 70.27%,  Time: 0:00:08 *

[3] How to Run the Code


   The run script takes two main arguments:

  · model: the model name;

  · embedding: the name of the pretrained embedding file, or random.

   Pass both arguments when running the project's run.py file, as in the example below:

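  For example (the .npz filename below is a placeholder; use whatever pretrained embedding file sits in your data/ directory):

python run.py --model TextRNN --embedding random
python run.py --model TextCNN --embedding glove.300d.npz   # hypothetical filename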

