Experiment 3: Handwritten Digit Recognition [Machine Learning]


Code: Handwritten digit recognition in Python (beginner-friendly)

Project structure



1.py

Requires the joblib package (install with pip install joblib).

import numpy as np
from sklearn.linear_model import LogisticRegression
import os
import joblib
# Data preprocessing
trainData = np.loadtxt('digits_training.csv', delimiter=",", skiprows=1)  # load the data
MTrain, NTrain = np.shape(trainData)  # number of rows and columns
print("Training set:", MTrain, NTrain)
xTrain = trainData[:, 1:NTrain]
xTrain_col_avg = np.mean(xTrain, axis=0)  # per-column (per-pixel) means
xTrain = (xTrain - xTrain_col_avg) / 255  # center each column, then scale by 255
yTrain = trainData[:, 0]
'''================================='''
# Train the model
model = LogisticRegression(solver='lbfgs', multi_class='multinomial', max_iter=500)
model.fit(xTrain, yTrain)
print("Training finished")
'''================================='''
# Test the model
testData = np.loadtxt('digits_testing.csv', delimiter=",", skiprows=1)
MTest, NTest = np.shape(testData)
print("Test set:", MTest, NTest)
xTest = testData[:, 1:NTest]
xTest = (xTest - xTrain_col_avg) / 255  # normalize with the *training* column means
yTest = testData[:, 0]
yPredict = model.predict(xTest)
errors = np.count_nonzero(yTest - yPredict)  # nonzero entries = misclassified samples
print("Prediction finished. Errors:", errors)
print("Test accuracy:", (MTest - errors) / MTest)
'''================================='''
# Save the model
dirs = 'testModel'
if not os.path.exists(dirs):  # create the output directory if needed
    os.makedirs(dirs)
joblib.dump(model, dirs + '/model.pkl')
print("Model saved")

Result 1

2.py

Requires OpenCV's cv2 module (install with pip install opencv-python).



Place a few digit images in the test/ directory (dark digits on a light background work best; the script inverts them to white-on-black to match the training data).



import cv2
import numpy as np
import joblib
img = cv2.imread(r"test/img1.png")
GrayImage = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Binarize and invert: threshold at 127 so a dark digit on a light background
# becomes a white digit on black, like the training images
ret, thresh2 = cv2.threshold(GrayImage, 127, 255, cv2.THRESH_BINARY_INV)
Image = cv2.resize(thresh2, (28, 28))  # shrink to the 28x28 training resolution
img_array = np.asarray(Image)
z = img_array.reshape(1, -1)  # flatten to one row of 784 features
'''================================================'''
model = joblib.load('testModel' + '/model.pkl')
yPredict = model.predict(z)
print(yPredict)
y = str(int(yPredict[0]))  # e.g. "3" rather than "[3.]"
cv2.putText(img, y, (10, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2, cv2.LINE_AA)
cv2.imshow("img", img)
cv2.waitKey(0)
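
One caveat: 1.py trains on inputs normalized as (x - column mean) / 255, but this script feeds the resized binary image to the model without that normalization, which can hurt accuracy. A minimal sketch of keeping the two consistent, assuming you add one extra joblib.dump line to 1.py to persist the training means:

# in 1.py, next to the model dump (hypothetical extra line):
#   joblib.dump(xTrain_col_avg, dirs + '/mean.pkl')
# in 2.py, before calling model.predict:
col_avg = joblib.load('testModel/mean.pkl')  # column means saved during training
z = (z - col_avg) / 255                      # same normalization as training
yPredict = model.predict(z)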

Result 2

Code: MNIST handwritten digit recognition with PyTorch (very detailed)

pytorch.py

import torch
import torchvision
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
n_epochs = 3
batch_size_train = 64
batch_size_test = 1000
learning_rate = 0.01
momentum = 0.5
log_interval = 10
random_seed = 1
torch.manual_seed(random_seed)
train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=True, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./data/', train=False, download=True,
                               transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ])),
    batch_size=batch_size_test, shuffle=True)
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
# print(example_targets)
# print(example_data.shape)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Ground Truth: {}".format(example_targets[i]))
    plt.xticks([])
    plt.yticks([])
plt.show()
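# A small CNN: two 5x5 conv layers, each followed by 2x2 max pooling.
# Shapes: 28x28 -> conv1 -> 24x24 -> pool -> 12x12 -> conv2 -> 8x8 -> pool -> 4x4,
# so the flattened feature size is 20 channels * 4 * 4 = 320 (matching fc1 below).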
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)
network = Net()
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
train_losses = []
train_counter = []
test_losses = []
test_counter = [i * len(train_loader.dataset) for i in range(n_epochs + 1)]
def train(epoch):
    network.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        optimizer.zero_grad()
        output = network(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(epoch, batch_idx * len(data),
                                                                           len(train_loader.dataset),
                                                                           100. * batch_idx / len(train_loader),
                                                                           loss.item()))
            train_losses.append(loss.item())
            train_counter.append((batch_idx * 64) + ((epoch - 1) * len(train_loader.dataset)))
            torch.save(network.state_dict(), './model.pth')
            torch.save(optimizer.state_dict(), './optimizer.pth')
def test():
    network.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = network(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()
            pred = output.data.max(1, keepdim=True)[1]
            correct += pred.eq(target.data.view_as(pred)).sum()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
train(1)
test()  # run test() once before the epoch loop so test_losses lines up with test_counter; otherwise the plot below fails with "x and y must be the same size"
for epoch in range(1, n_epochs + 1):
    train(epoch)
    test()
fig = plt.figure()
plt.plot(train_counter, train_losses, color='blue')
plt.scatter(test_counter, test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('number of training examples seen')
plt.ylabel('negative log likelihood loss')
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
with torch.no_grad():
    output = network(example_data)
fig = plt.figure()
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.tight_layout()
    plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
    plt.title("Prediction: {}".format(output.data.max(1, keepdim=True)[1][i].item()))
    plt.xticks([])
    plt.yticks([])
plt.show()
# ----------------------------------------------------------- #
continued_network = Net()
continued_optimizer = optim.SGD(continued_network.parameters(), lr=learning_rate, momentum=momentum)
network_state_dict = torch.load('model.pth')
continued_network.load_state_dict(network_state_dict)
optimizer_state_dict = torch.load('optimizer.pth')
continued_optimizer.load_state_dict(optimizer_state_dict)
# Note: do not comment out the "for epoch in range(1, n_epochs + 1):" loop above,
# or the plot will fail with: x and y must be the same size
# Why start at 4? Because n_epochs = 3 and the loop above already covered [1, n_epochs + 1).
for i in range(4, 9):
    test_counter.append(i * len(train_loader.dataset))
    train(i)
    test()
fig = plt.figure()
plt.plot(train_counter, train_losses, color='blue')
plt.scatter(test_counter, test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('number of training examples seen')
plt.ylabel('negative log likelihood loss')
plt.show()
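
If you only need the trained weights for inference rather than continued training, the checkpoint can be reloaded into a fresh network and switched to eval mode so dropout is disabled (a minimal sketch, reusing the Net class and example_data from above):

inference_net = Net()
inference_net.load_state_dict(torch.load('./model.pth'))  # weights saved in train()
inference_net.eval()  # disable dropout for deterministic predictions
with torch.no_grad():
    print(inference_net(example_data).argmax(dim=1)[:6])  # predicted digits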

Run results

(figures omitted: the sample test digits with ground-truth labels, the training log, the train/test loss curves, and the example predictions)

Project structure after running

Code: my own version

import torch
import torchvision
from torch.utils.data import DataLoader
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# import matplotlib.pyplot as plt
num_epochs = 6
batch_size = 100
learning_rate = 0.1
momentum = 0.5
log_interval = 10
random_seed = 1
torch.manual_seed(random_seed)
input_size=28*28
num_classes=10
train_dataset= torchvision.datasets.MNIST('./data/', train=True, download=True,
                                    transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ]))
test_dataset=torchvision.datasets.MNIST('./data/', train=False, download=True,transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   torchvision.transforms.Normalize(
                                       (0.1307,), (0.3081,))
                               ]))
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,batch_size=batch_size, shuffle=False)
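# A plain fully-connected network (MLP): input_size -> hidden_size[0] -> hidden_size[1]
# -> num_classes, with ReLU between layers; instantiated below as NeuralNet(28*28, [256, 64], 10).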
class NeuralNet(nn.Module):
    def __init__(self,input_size,hidden_size,num_classes):
        super(NeuralNet, self).__init__()
        self.fc1=nn.Linear(input_size,hidden_size[0])
        self.fc2=nn.Linear(hidden_size[0],hidden_size[1])
        self.fc3=nn.Linear(hidden_size[1],num_classes)
        self.relu = nn.ReLU()
    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        out = self.relu(out)
        out = self.fc3(out)
        return out
model = NeuralNet(input_size, [256, 64], num_classes)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(num_epochs):
    for i,(images,labels) in enumerate(train_loader):
        images=images.reshape(-1,28*28)
        outputs=model(images)
        # F.one_hot infers the width from the batch's max label unless num_classes
        # is given, which can mis-size the one-hot target; pass it explicitly.
        # (nn.CrossEntropyLoss also accepts the integer labels directly.)
        labels_onehot = F.one_hot(labels, num_classes=num_classes)
        loss = criterion(outputs, labels_onehot.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if i % 600 == 0:
            print("Epoch :{} \t Loss:{:.6f}".format(epoch, loss.item()))
torch.save(model,'model_total.ckpt')
# torch.save(net.state_dict(),'model_para.ckpt')
#-------------------------------------------------------------------
model=torch.load('model_total.ckpt')
'''
net=NeuralNet(input_size,[256,64],num_classes)  # sizes must match the saved model
net.load_state_dict(torch.load('model_para.ckpt'))
'''
def acc(labels,outputs):
    _,predicted=torch.max(outputs.data,1)
    num=len(labels)
    right=(predicted==labels).sum().item()
    return num,right
with torch.no_grad():
    correct,total=0,0
    for images,labels in test_loader:
        images=images.reshape(-1,28*28)
        outputs=model(images)
        num,right=acc(labels,outputs)
        correct=correct+right
        total=total+num
    print('Accuracy of the network on the 10000 test images:{}%'.format(100*correct/total))

Result

Epoch :0   Loss:2.294961
Epoch :1   Loss:0.086749
Epoch :2   Loss:0.101823
Epoch :3   Loss:0.045709
Epoch :4   Loss:0.053201
Epoch :5   Loss:0.032638
Accuracy of the network on the 10000 test images:98.0%