Hands-On MNIST Classification (CPU Version + GPU Version)

Summary: building an MNIST classifier by hand, first on the CPU and then on the GPU.

1. Dataset Overview

Most readers are already familiar with the MNIST dataset, so rather than repeating the full background I will just list the key facts:

Classes: the digits 0-9, for a total of 10 classes;

Size: 70,000 grayscale images in total, 60,000 for training and 10,000 for testing. Every image is labeled: the digit 0 carries label 0, the digit 1 carries label 1, and so on.

Image size: 28×28 pixels

Channels: single channel (grayscale)
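
These figures are easy to verify directly with torchvision. A minimal sketch (it assumes torchvision is installed; download=True fetches the data into ./data on the first run):

from torchvision.datasets import mnist
from torchvision import transforms
# Load both splits, downloading them if they are not already in ./data
train_set = mnist.MNIST('./data', train=True, transform=transforms.ToTensor(), download=True)
test_set = mnist.MNIST('./data', train=False, transform=transforms.ToTensor(), download=True)
print(len(train_set), len(test_set))  # 60000 10000
img, label = train_set[0]
print(img.shape, label)  # torch.Size([1, 28, 28]) 5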



2. Network Overview

The network used here is quite simple and is described in plenty of forum posts and papers, so again I will not dwell on it.

Defining the network class:

import torch.nn as nn
class Net(nn.Module):
    def __init__(self, in_c=784, out_c=10):
        super(Net, self).__init__()
        # Fully connected layer
        self.fc1 = nn.Linear(in_c, 512)
        # Activation layer
        self.act1 = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(512, 256)
        self.act2 = nn.ReLU(inplace=True)
        self.fc3 = nn.Linear(256, 128)
        self.act3 = nn.ReLU(inplace=True)
        self.fc4 = nn.Linear(128, out_c)
    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        x = self.act3(self.fc3(x))
        x = self.fc4(x)
        return x
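
Before training, it is worth a quick sanity check that the network maps a flattened 28×28 image to 10 class scores; note that the output is raw logits, with no softmax (see the note on the loss function below). A minimal sketch (the batch size of 4 is arbitrary):

import torch
net = Net()
x = torch.randn(4, 784)  # a dummy batch of 4 flattened 28x28 images
out = net(x)
print(out.shape)  # torch.Size([4, 10]) -- one score per class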



3. Hands-On: the CPU Version

With the network in place, we can start building the CPU version of the MNIST classifier.


Workflow

3.1 Instantiate the network class

3.2 Load the training and test sets

3.3 Wrap both sets in loaders (DataLoader)

3.4 Define the loss function: cross-entropy

3.5 Define the optimizer: stochastic gradient descent (SGD)

3.6 Create empty lists to record the training loss and accuracy, then open the outer epoch loop A

3.7 Create empty lists to record the test loss and accuracy (note: keep these separate from the training records)

3.8 Set the number of epochs

3.9 Inside loop A: initialize the running training loss and accuracy for the epoch

3.10 Inside loop A: switch the network into training mode (net.train())

3.11 Inside loop A: build the inner loop B over batch indices, images, and labels

3.12 Inside loop B: wrap the images and labels in Variable (a no-op since PyTorch 0.4)

3.13 Inside loop B: forward pass, out = net(img) and loss = criterion(out, label)

3.14 Inside loop B: accumulate the loss

3.15 Inside loop B: compute the classification accuracy (see the sketch after this list)

3.16 After loop B ends (still inside loop A): append the epoch's loss and accuracy

3.17 Inside loop A: run an analogous loop over the test set (which is not trained on); everything else stays the same
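
Two details in steps 3.13 and 3.15 are worth spelling out before the full demo: nn.CrossEntropyLoss expects raw logits together with integer class labels (it applies log-softmax internally, which is why the network above has no softmax layer), and the predicted class is simply the index of the largest logit. A minimal sketch on dummy data:

import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()
out = torch.randn(64, 10)  # raw logits for a batch of 64
label = torch.randint(0, 10, (64,))  # integer class labels in [0, 10)
loss = criterion(out, label)  # log-softmax + negative log-likelihood, averaged over the batch
_, pred = out.max(1)  # index of the largest logit = predicted class
acc = (pred == label).sum().item() / out.shape[0]
print(loss.item(), acc)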




The CPU demo:

import time
import torch
import torch.nn as nn
from torch import optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision.datasets import mnist
from torchvision import transforms
class Net(nn.Module):
    def __init__(self, in_c=784, out_c=10):
        super(Net, self).__init__()
        # Fully connected layer
        self.fc1 = nn.Linear(in_c, 512)
        # Activation layer
        self.act1 = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(512, 256)
        self.act2 = nn.ReLU(inplace=True)
        self.fc3 = nn.Linear(256, 128)
        self.act3 = nn.ReLU(inplace=True)
        self.fc4 = nn.Linear(128, out_c)
    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        x = self.act3(self.fc3(x))
        x = self.fc4(x)
        return x
t1 = time.time()
# Build the network
net = Net()
# Training set (download=True fetches the data into ./data on the first run)
train_set = mnist.MNIST('./data', train=True, transform=transforms.ToTensor(), download=True)
# Test set
test_set = mnist.MNIST('./data', train=False, transform=transforms.ToTensor(), download=True)
# Training loader
train_data = DataLoader(train_set, batch_size=64, shuffle=True)
# Test loader
test_data = DataLoader(test_set, batch_size=64, shuffle=True)
# Loss function: cross-entropy
criterion = nn.CrossEntropyLoss()
# Optimizer: stochastic gradient descent
optimizer = optim.SGD(net.parameters(), lr=0.01, weight_decay=0.00005)
# Start training
losses = []       # records the training loss per epoch
acces = []        # records the training accuracy per epoch
eval_losses = []  # records the test loss per epoch
eval_acces = []   # records the test accuracy per epoch
nums_epoch = 20   # number of epochs
for epoch in range(nums_epoch):
    train_loss = 0  # running training loss for this epoch
    train_acc = 0   # running training accuracy for this epoch
    net.train()
    for batch, (img, label) in enumerate(train_data):
        # Flatten each 1x28x28 image into a 784-dim vector
        img = img.reshape(img.size(0), -1)
        img = Variable(img)  # Variable is a no-op since PyTorch 0.4 and could be dropped
        label = Variable(label)
        # Forward pass
        out = net(img)
        loss = criterion(out, label)
        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Accumulate the loss
        train_loss += loss.item()
        # Compute the classification accuracy
        _, pred = out.max(1)
        num_correct = (pred == label).sum().item()
        acc = num_correct / img.shape[0]
        # Accumulate on every batch, not only on the logged ones
        train_acc += acc
        if (batch + 1) % 200 == 0:
            print('[INFO] Epoch-{}-Batch-{}: Train: Loss-{:.4f},Accuracy-{:.4f}'.format(epoch + 1, batch + 1, loss.item(), acc))
    losses.append(train_loss / len(train_data))
    acces.append(train_acc / len(train_data))
    eval_loss = 0
    eval_acc = 0
    # The test set is not trained on: switch to eval mode and disable gradients
    net.eval()
    with torch.no_grad():
        for img, label in test_data:
            img = img.reshape(img.size(0), -1)
            img = Variable(img)
            label = Variable(label)
            out = net(img)
            loss = criterion(out, label)
            eval_loss += loss.item()
            _, pred = out.max(1)
            num_correct = (pred == label).sum().item()
            acc = num_correct / img.shape[0]
            eval_acc += acc
    eval_losses.append(eval_loss / len(test_data))
    eval_acces.append(eval_acc / len(test_data))
    # Print the epoch statistics
    set_epoch = epoch + 1
    set_lossTrain = train_loss / len(train_data)
    set_AccTrain = train_acc / len(train_data)
    set_lossEval = eval_loss / len(test_data)
    set_AccEval = eval_acc / len(test_data)
    print('[INFO] Epoch-{}: Train: Loss-{:.4f},Accuracy-{:.4f} |Test:Loss-{:.4f}, Accuracy-{:.4f}'.format(set_epoch,
    set_lossTrain, set_AccTrain, set_lossEval, set_AccEval))
t2 = time.time()
t = t2 - t1
print(t)  # total wall-clock time in seconds

Output of the CPU demo (in this run, the per-epoch train accuracy was accumulated only on the batches that were logged, so the Accuracy figure on the per-epoch lines reads as roughly 0.004 rather than a true accuracy):

[INFO] Epoch-1-Batch-200: Train: Loss-2.2869,Accuracy-0.2500
[INFO] Epoch-1-Batch-400: Train: Loss-2.2628,Accuracy-0.4062
[INFO] Epoch-1-Batch-600: Train: Loss-2.2056,Accuracy-0.5156
[INFO] Epoch-1-Batch-800: Train: Loss-1.9502,Accuracy-0.6875
[INFO] Epoch-1: Train: Loss-2.1569,Accuracy-0.0020 |Test:Loss-1.5634, Accuracy-0.6395
[INFO] Epoch-2-Batch-200: Train: Loss-0.9539,Accuracy-0.8125
[INFO] Epoch-2-Batch-400: Train: Loss-0.8835,Accuracy-0.7500
[INFO] Epoch-2-Batch-600: Train: Loss-0.5718,Accuracy-0.8281
[INFO] Epoch-2-Batch-800: Train: Loss-0.5851,Accuracy-0.8125
[INFO] Epoch-2: Train: Loss-0.8250,Accuracy-0.0034 |Test:Loss-0.5095, Accuracy-0.8504
[INFO] Epoch-3-Batch-200: Train: Loss-0.4938,Accuracy-0.7500
[INFO] Epoch-3-Batch-400: Train: Loss-0.5644,Accuracy-0.8438
[INFO] Epoch-3-Batch-600: Train: Loss-0.4656,Accuracy-0.8750
[INFO] Epoch-3-Batch-800: Train: Loss-0.4800,Accuracy-0.8438
[INFO] Epoch-3: Train: Loss-0.4432,Accuracy-0.0035 |Test:Loss-0.3720, Accuracy-0.8914
[INFO] Epoch-4-Batch-200: Train: Loss-0.3568,Accuracy-0.8750
[INFO] Epoch-4-Batch-400: Train: Loss-0.3659,Accuracy-0.8594
[INFO] Epoch-4-Batch-600: Train: Loss-0.3843,Accuracy-0.8281
[INFO] Epoch-4-Batch-800: Train: Loss-0.3291,Accuracy-0.8906
[INFO] Epoch-4: Train: Loss-0.3600,Accuracy-0.0037 |Test:Loss-0.3328, Accuracy-0.9015
[INFO] Epoch-5-Batch-200: Train: Loss-0.2843,Accuracy-0.8906
[INFO] Epoch-5-Batch-400: Train: Loss-0.2729,Accuracy-0.9375
[INFO] Epoch-5-Batch-600: Train: Loss-0.2628,Accuracy-0.9219
[INFO] Epoch-5-Batch-800: Train: Loss-0.1479,Accuracy-0.9531
[INFO] Epoch-5: Train: Loss-0.3174,Accuracy-0.0039 |Test:Loss-0.2917, Accuracy-0.9161
[INFO] Epoch-6-Batch-200: Train: Loss-0.3273,Accuracy-0.9062
[INFO] Epoch-6-Batch-400: Train: Loss-0.2906,Accuracy-0.9375
[INFO] Epoch-6-Batch-600: Train: Loss-0.2957,Accuracy-0.9062
[INFO] Epoch-6-Batch-800: Train: Loss-0.2804,Accuracy-0.9375
[INFO] Epoch-6: Train: Loss-0.2839,Accuracy-0.0039 |Test:Loss-0.2652, Accuracy-0.9247
[INFO] Epoch-7-Batch-200: Train: Loss-0.3675,Accuracy-0.8906
[INFO] Epoch-7-Batch-400: Train: Loss-0.3041,Accuracy-0.8906
[INFO] Epoch-7-Batch-600: Train: Loss-0.2421,Accuracy-0.9375
[INFO] Epoch-7-Batch-800: Train: Loss-0.1761,Accuracy-0.9219
[INFO] Epoch-7: Train: Loss-0.2561,Accuracy-0.0039 |Test:Loss-0.2401, Accuracy-0.9319
[INFO] Epoch-8-Batch-200: Train: Loss-0.1390,Accuracy-0.9531
[INFO] Epoch-8-Batch-400: Train: Loss-0.1204,Accuracy-0.9688
[INFO] Epoch-8-Batch-600: Train: Loss-0.1118,Accuracy-0.9844
[INFO] Epoch-8-Batch-800: Train: Loss-0.1276,Accuracy-0.9844
[INFO] Epoch-8: Train: Loss-0.2306,Accuracy-0.0041 |Test:Loss-0.2178, Accuracy-0.9365
[INFO] Epoch-9-Batch-200: Train: Loss-0.4543,Accuracy-0.9062
[INFO] Epoch-9-Batch-400: Train: Loss-0.3267,Accuracy-0.9219
[INFO] Epoch-9-Batch-600: Train: Loss-0.1870,Accuracy-0.9531
[INFO] Epoch-9-Batch-800: Train: Loss-0.3354,Accuracy-0.9062
[INFO] Epoch-9: Train: Loss-0.2094,Accuracy-0.0039 |Test:Loss-0.2016, Accuracy-0.9412
[INFO] Epoch-10-Batch-200: Train: Loss-0.1400,Accuracy-0.9219
[INFO] Epoch-10-Batch-400: Train: Loss-0.2871,Accuracy-0.9219
[INFO] Epoch-10-Batch-600: Train: Loss-0.1343,Accuracy-0.9531
[INFO] Epoch-10-Batch-800: Train: Loss-0.2881,Accuracy-0.8906
[INFO] Epoch-10: Train: Loss-0.1906,Accuracy-0.0039 |Test:Loss-0.1805, Accuracy-0.9460
[INFO] Epoch-11-Batch-200: Train: Loss-0.2244,Accuracy-0.9688
[INFO] Epoch-11-Batch-400: Train: Loss-0.1173,Accuracy-0.9688
[INFO] Epoch-11-Batch-600: Train: Loss-0.1551,Accuracy-0.9531
[INFO] Epoch-11-Batch-800: Train: Loss-0.1560,Accuracy-0.9531
[INFO] Epoch-11: Train: Loss-0.1748,Accuracy-0.0041 |Test:Loss-0.1693, Accuracy-0.9504
[INFO] Epoch-12-Batch-200: Train: Loss-0.2438,Accuracy-0.9688
[INFO] Epoch-12-Batch-400: Train: Loss-0.0888,Accuracy-0.9688
[INFO] Epoch-12-Batch-600: Train: Loss-0.0938,Accuracy-0.9688
[INFO] Epoch-12-Batch-800: Train: Loss-0.1019,Accuracy-0.9688
[INFO] Epoch-12: Train: Loss-0.1611,Accuracy-0.0041 |Test:Loss-0.1562, Accuracy-0.9515
[INFO] Epoch-13-Batch-200: Train: Loss-0.2955,Accuracy-0.9219
[INFO] Epoch-13-Batch-400: Train: Loss-0.3402,Accuracy-0.9062
[INFO] Epoch-13-Batch-600: Train: Loss-0.1040,Accuracy-0.9688
[INFO] Epoch-13-Batch-800: Train: Loss-0.1147,Accuracy-0.9844
[INFO] Epoch-13: Train: Loss-0.1491,Accuracy-0.0040 |Test:Loss-0.1475, Accuracy-0.9562
[INFO] Epoch-14-Batch-200: Train: Loss-0.0578,Accuracy-1.0000
[INFO] Epoch-14-Batch-400: Train: Loss-0.0836,Accuracy-0.9688
[INFO] Epoch-14-Batch-600: Train: Loss-0.1362,Accuracy-0.9688
[INFO] Epoch-14-Batch-800: Train: Loss-0.0897,Accuracy-0.9531
[INFO] Epoch-14: Train: Loss-0.1387,Accuracy-0.0041 |Test:Loss-0.1441, Accuracy-0.9561
[INFO] Epoch-15-Batch-200: Train: Loss-0.1424,Accuracy-0.9844
[INFO] Epoch-15-Batch-400: Train: Loss-0.0657,Accuracy-0.9844
[INFO] Epoch-15-Batch-600: Train: Loss-0.0836,Accuracy-0.9688
[INFO] Epoch-15-Batch-800: Train: Loss-0.1404,Accuracy-0.9688
[INFO] Epoch-15: Train: Loss-0.1289,Accuracy-0.0042 |Test:Loss-0.1301, Accuracy-0.9608
[INFO] Epoch-16-Batch-200: Train: Loss-0.1637,Accuracy-0.9219
[INFO] Epoch-16-Batch-400: Train: Loss-0.0509,Accuracy-1.0000
[INFO] Epoch-16-Batch-600: Train: Loss-0.2507,Accuracy-0.9375
[INFO] Epoch-16-Batch-800: Train: Loss-0.0801,Accuracy-0.9688
[INFO] Epoch-16: Train: Loss-0.1205,Accuracy-0.0041 |Test:Loss-0.1252, Accuracy-0.9610
[INFO] Epoch-17-Batch-200: Train: Loss-0.0761,Accuracy-0.9688
[INFO] Epoch-17-Batch-400: Train: Loss-0.0439,Accuracy-1.0000
[INFO] Epoch-17-Batch-600: Train: Loss-0.2204,Accuracy-0.9062
[INFO] Epoch-17-Batch-800: Train: Loss-0.0640,Accuracy-0.9844
[INFO] Epoch-17: Train: Loss-0.1128,Accuracy-0.0041 |Test:Loss-0.1211, Accuracy-0.9617
[INFO] Epoch-18-Batch-200: Train: Loss-0.0907,Accuracy-0.9844
[INFO] Epoch-18-Batch-400: Train: Loss-0.0587,Accuracy-0.9844
[INFO] Epoch-18-Batch-600: Train: Loss-0.0478,Accuracy-1.0000
[INFO] Epoch-18-Batch-800: Train: Loss-0.0532,Accuracy-0.9844
[INFO] Epoch-18: Train: Loss-0.1057,Accuracy-0.0042 |Test:Loss-0.1113, Accuracy-0.9654
[INFO] Epoch-19-Batch-200: Train: Loss-0.1051,Accuracy-0.9531
[INFO] Epoch-19-Batch-400: Train: Loss-0.1953,Accuracy-0.9219
[INFO] Epoch-19-Batch-600: Train: Loss-0.1334,Accuracy-0.9531
[INFO] Epoch-19-Batch-800: Train: Loss-0.1170,Accuracy-0.9531
[INFO] Epoch-19: Train: Loss-0.0991,Accuracy-0.0040 |Test:Loss-0.1087, Accuracy-0.9662
[INFO] Epoch-20-Batch-200: Train: Loss-0.0581,Accuracy-1.0000
[INFO] Epoch-20-Batch-400: Train: Loss-0.0779,Accuracy-0.9531
[INFO] Epoch-20-Batch-600: Train: Loss-0.1448,Accuracy-0.9375
[INFO] Epoch-20-Batch-800: Train: Loss-0.0859,Accuracy-0.9688
[INFO] Epoch-20: Train: Loss-0.0934,Accuracy-0.0041 |Test:Loss-0.1075, Accuracy-0.9661
143.9970304965973


4. Moving from CPU to GPU

The overall workflow is unchanged when moving from CPU to GPU. The key line is:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

It selects the GPU whenever the machine has CUDA available and falls back to the CPU otherwise.
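
A minimal sketch of the pattern (the tensor x is just a placeholder example):

import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.randn(2, 784)
x = x.to(device)  # move the tensor to the selected device
print(x.device)  # e.g. "cuda:0" on a GPU machine, "cpu" otherwise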


4.1 Notes

4.1.1 Using .to(device) deliberately lets the network run more smoothly and supports larger batches, making better use of the available resources.

4.1.2 cuDNN is used here, so it needs to be imported: import torch.backends.cudnn as cudnn

4.1.3 Move the network, the data, the labels, and the loss function to the device, each with .to(device).

4.1.4 Allocate a reasonably larger batch size.


4.2 Concretely, the changes look as follows:


4.2.1 Move the network to the device and enable cuDNN

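Excerpted from the full demo below:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net()
cudnn.benchmark = True  # let cuDNN pick the fastest kernels for fixed-size inputs
net = net.to(device)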

4.2.2 Allocate a larger batch size

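Excerpted from the full demo below:

train_data = DataLoader(train_set, batch_size=640, shuffle=True)
test_data = DataLoader(test_set, batch_size=320, shuffle=True)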

4.2.3 Move the torch.nn loss function to the device

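Excerpted from the full demo below:

criterion = torch.nn.CrossEntropyLoss().to(device)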

4.2.4 Move the img and label tensors to the device, in both the training and the test loop

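Excerpted from the full demo below (the same lines appear in both the training and the test loop):

img = Variable(img)
img = img.to(device)
label = Variable(label)
label = label.to(device)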


The GPU demo:


import time
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch import optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torchvision.datasets import mnist
from torchvision import transforms
class Net(nn.Module):
    def __init__(self, in_c=784, out_c=10):
        super(Net, self).__init__()
        # Fully connected layer
        self.fc1 = nn.Linear(in_c, 512)
        # Activation layer
        self.act1 = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(512, 256)
        self.act2 = nn.ReLU(inplace=True)
        self.fc3 = nn.Linear(256, 128)
        self.act3 = nn.ReLU(inplace=True)
        self.fc4 = nn.Linear(128, out_c)
    def forward(self, x):
        x = self.act1(self.fc1(x))
        x = self.act2(self.fc2(x))
        x = self.act3(self.fc3(x))
        x = self.fc4(x)
        return x
t1 = time.time()
# Build the network and move it to the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = Net()
cudnn.benchmark = True  # let cuDNN pick the fastest kernels for fixed-size inputs
net = net.to(device)
# Training set (download=True fetches the data into ./data on the first run)
train_set = mnist.MNIST('./data', train=True, transform=transforms.ToTensor(), download=True)
# Test set
test_set = mnist.MNIST('./data', train=False, transform=transforms.ToTensor(), download=True)
# Training loader (larger batches than the CPU version)
train_data = DataLoader(train_set, batch_size=640, shuffle=True)
# Test loader
test_data = DataLoader(test_set, batch_size=320, shuffle=True)
# Loss function: cross-entropy, moved to the device
criterion = torch.nn.CrossEntropyLoss().to(device)
# Optimizer: stochastic gradient descent
optimizer = optim.SGD(net.parameters(), lr=0.01, weight_decay=0.00005)
# Start training
losses = []       # records the training loss per epoch
acces = []        # records the training accuracy per epoch
eval_losses = []  # records the test loss per epoch
eval_acces = []   # records the test accuracy per epoch
nums_epoch = 20   # number of epochs
for epoch in range(nums_epoch):
    train_loss = 0  # running training loss for this epoch
    train_acc = 0   # running training accuracy for this epoch
    net.train()
    for batch, (img, label) in enumerate(train_data):
        # Flatten each 1x28x28 image into a 784-dim vector
        img = img.reshape(img.size(0), -1)
        img = Variable(img)  # Variable is a no-op since PyTorch 0.4 and could be dropped
        img = img.to(device)
        label = Variable(label)
        label = label.to(device)
        # Forward pass
        out = net(img)
        loss = criterion(out, label)
        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Accumulate the loss
        train_loss += loss.item()
        # Compute the classification accuracy
        _, pred = out.max(1)
        num_correct = (pred == label).sum().item()
        acc = num_correct / img.shape[0]
        # Accumulate on every batch, not only on the logged ones
        train_acc += acc
        # With batch_size=640 there are only ~94 batches per epoch, so this
        # 200-batch log never fires; lower the interval if per-batch logs are wanted
        if (batch + 1) % 200 == 0:
            print('[INFO] Epoch-{}-Batch-{}: Train: Loss-{:.4f},Accuracy-{:.4f}'.format(epoch + 1, batch + 1,
                                                                                        loss.item(), acc))
    losses.append(train_loss / len(train_data))
    acces.append(train_acc / len(train_data))
    eval_loss = 0
    eval_acc = 0
    # The test set is not trained on: switch to eval mode and disable gradients
    net.eval()
    with torch.no_grad():
        for img, label in test_data:
            img = img.reshape(img.size(0), -1)
            img = Variable(img)
            img = img.to(device)
            label = Variable(label)
            label = label.to(device)
            out = net(img)
            loss = criterion(out, label)  # label is already on the device
            eval_loss += loss.item()
            _, pred = out.max(1)
            num_correct = (pred == label).sum().item()
            acc = num_correct / img.shape[0]
            eval_acc += acc
    eval_losses.append(eval_loss / len(test_data))
    eval_acces.append(eval_acc / len(test_data))
    # Print the epoch statistics
    set_epoch = epoch + 1
    set_lossTrain = train_loss / len(train_data)
    set_AccTrain = train_acc / len(train_data)
    set_lossEval = eval_loss / len(test_data)
    set_AccEval = eval_acc / len(test_data)
    print('[INFO] Epoch-{}: Train: Loss-{:.4f},Accuracy-{:.4f} |Test:Loss-{:.4f}, Accuracy-{:.4f}'.format(set_epoch,
                                                                                                          set_lossTrain,
                                                                                                          set_AccTrain,
                                                                                                          set_lossEval,
                                                                                                          set_AccEval))
t2 = time.time()
t = t2 - t1
print(t)  # total wall-clock time in seconds

Output of the GPU version (in this run the per-batch log interval of 200 was never reached, since batch_size=640 gives only about 94 batches per epoch; for the same reason the train accuracy was never accumulated and reads 0.0000):

[INFO] Epoch-1: Train: Loss-2.3025,Accuracy-0.0000 |Test:Loss-2.2982, Accuracy-0.1018
[INFO] Epoch-2: Train: Loss-2.2950,Accuracy-0.0000 |Test:Loss-2.2902, Accuracy-0.1034
[INFO] Epoch-3: Train: Loss-2.2867,Accuracy-0.0000 |Test:Loss-2.2812, Accuracy-0.1491
[INFO] Epoch-4: Train: Loss-2.2762,Accuracy-0.0000 |Test:Loss-2.2685, Accuracy-0.3172
[INFO] Epoch-5: Train: Loss-2.2611,Accuracy-0.0000 |Test:Loss-2.2498, Accuracy-0.4545
[INFO] Epoch-6: Train: Loss-2.2383,Accuracy-0.0000 |Test:Loss-2.2205, Accuracy-0.5208
[INFO] Epoch-7: Train: Loss-2.2020,Accuracy-0.0000 |Test:Loss-2.1736, Accuracy-0.5357
[INFO] Epoch-8: Train: Loss-2.1407,Accuracy-0.0000 |Test:Loss-2.0909, Accuracy-0.5215
[INFO] Epoch-9: Train: Loss-2.0353,Accuracy-0.0000 |Test:Loss-1.9529, Accuracy-0.5164
[INFO] Epoch-10: Train: Loss-1.8638,Accuracy-0.0000 |Test:Loss-1.7380, Accuracy-0.5524
[INFO] Epoch-11: Train: Loss-1.6231,Accuracy-0.0000 |Test:Loss-1.4687, Accuracy-0.6281
[INFO] Epoch-12: Train: Loss-1.3598,Accuracy-0.0000 |Test:Loss-1.2147, Accuracy-0.7030
[INFO] Epoch-13: Train: Loss-1.1373,Accuracy-0.0000 |Test:Loss-1.0222, Accuracy-0.7411
[INFO] Epoch-14: Train: Loss-0.9642,Accuracy-0.0000 |Test:Loss-0.8700, Accuracy-0.7768
[INFO] Epoch-15: Train: Loss-0.8361,Accuracy-0.0000 |Test:Loss-0.7663, Accuracy-0.7890
[INFO] Epoch-16: Train: Loss-0.7451,Accuracy-0.0000 |Test:Loss-0.6880, Accuracy-0.8087
[INFO] Epoch-17: Train: Loss-0.6791,Accuracy-0.0000 |Test:Loss-0.6312, Accuracy-0.8223
[INFO] Epoch-18: Train: Loss-0.6299,Accuracy-0.0000 |Test:Loss-0.5862, Accuracy-0.8302
[INFO] Epoch-19: Train: Loss-0.5907,Accuracy-0.0000 |Test:Loss-0.5545, Accuracy-0.8388
[INFO] Epoch-20: Train: Loss-0.5588,Accuracy-0.0000 |Test:Loss-0.5313, Accuracy-0.8452
84.11509966850281


What I typed up is probably still not quite right; corrections are welcome. For the record, the GPU run finished in about 84.1 s against 144.0 s on the CPU, roughly a 1.7× speedup; but with a 10× larger batch and an unchanged learning rate it makes far fewer weight updates per epoch, which is why its accuracy after 20 epochs lags well behind the CPU run's.


阿里云AI服务器提供多种配置选项,包括CPU+GPU、CPU+FPGA等组合,支持高性能计算需求。本文汇总了阿里云GPU服务器的价格信息,涵盖NVIDIA A10、V100、T4、P4、P100等多款GPU卡,适用于人工智能、机器学习和深度学习等场景。详细价格表和实例规格见文内图表。