[PyTorch] Neural Network (Part 2)


6、Linear Layers - Linear

Reference: https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear
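Linear(in_features, out_features) applies an affine transform y = x @ A^T + b to the last dimension of its input; any leading dimensions pass through unchanged. A minimal sketch (toy sizes, not from the tutorial):

import torch
from torch.nn import Linear

layer = Linear(in_features=5, out_features=2)
x = torch.ones((3, 5))    # only the last dimension must equal in_features
print(layer(x).shape)     # torch.Size([3, 2])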

import torch
import torchvision
from torch import nn
from torch.nn import Linear
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
# drop_last=True: the final batch of the test set has fewer than 64 images and
# would not match the fixed in_features below
dataloader = DataLoader(dataset, batch_size=64, drop_last=True)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        # 196608 = 64 * 3 * 32 * 32: an entire batch flattened into one vector
        self.linear1 = Linear(in_features=196608, out_features=10)
    def forward(self, input):
        output = self.linear1(input)
        return output
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    print(imgs.shape)  # torch.Size([64, 3, 32, 32])
    output = torch.reshape(imgs, (1, 1, 1, -1))
    print(output.shape)  # torch.Size([1, 1, 1, 196608])
    output = tudui(output)
    print(output.shape)  # torch.Size([1, 1, 1, 10])
Files already downloaded and verified
torch.Size([64, 3, 32, 32])
torch.Size([1, 1, 1, 196608])
torch.Size([1, 1, 1, 10])
...
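Reshaping a whole batch into a single (1, 1, 1, 196608) vector works, but it merges 64 images into one sample, which is why in_features has to be 64 * 3 * 32 * 32 = 196608. A more conventional variant (a sketch, not part of the original tutorial) keeps the batch dimension and sizes the layer per image:

import torch
from torch.nn import Linear

imgs = torch.ones((64, 3, 32, 32))           # stand-in for one CIFAR10 batch
linear = Linear(in_features=3 * 32 * 32, out_features=10)
flat = torch.flatten(imgs, start_dim=1)      # torch.Size([64, 3072]): batch dim kept
print(linear(flat).shape)                    # torch.Size([64, 10]): one prediction per image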

6.1 flatten

Reference: https://pytorch.org/docs/stable/generated/torch.flatten.html?highlight=flatten#torch.flatten

output = torch.reshape(imgs, (1, 1, 1, -1))
print(output.shape)  # torch.Size([1, 1, 1, 196608])

can be replaced with:

output = torch.flatten(imgs)
print(output.shape)  # torch.Size([196608])
output = tudui(output)
print(output.shape)  # torch.Size([10])
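Note the differing defaults: torch.flatten starts at dim 0 and merges everything (hence the batch-less [196608] above), while the nn.Flatten module defaults to start_dim=1 and keeps the batch dimension. A quick comparison:

import torch

imgs = torch.ones((64, 3, 32, 32))
print(torch.flatten(imgs).shape)               # torch.Size([196608])
print(torch.flatten(imgs, start_dim=1).shape)  # torch.Size([64, 3072])
print(torch.nn.Flatten()(imgs).shape)          # torch.Size([64, 3072]), same as start_dim=1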

7、CIFAR 10 Model and Sequential

CIFAR 10: https://www.cs.toronto.edu/~kriz/cifar.html

The model below follows the usual CIFAR-10 classification structure: three Conv2d(5x5, padding=2) + MaxPool2d(2) stages, then Flatten and two Linear layers mapping each 3x32x32 image to 10 class scores.

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.conv1 = Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2)
        self.maxpool1 = MaxPool2d(kernel_size=2)
        self.conv2 = Conv2d(in_channels=32, out_channels=32, kernel_size=5, padding=2)
        self.maxpool2 = MaxPool2d(kernel_size=2)
        self.conv3 = Conv2d(in_channels=32, out_channels=64, kernel_size=5, padding=2)
        self.maxpool3 = MaxPool2d(kernel_size=2)
        self.flatten = Flatten()
        self.linear1 = Linear(1024, 64)
        self.linear2 = Linear(64, 10)
    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = self.maxpool2(x)
        x = self.conv3(x)
        x = self.maxpool3(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = self.linear2(x)
        return x
tudui = Tudui()
print(tudui)
input = torch.ones((64, 3, 32, 32))
output = tudui(input)
print(output.shape)
Tudui(
  (conv1): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv3): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (maxpool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear1): Linear(in_features=1024, out_features=64, bias=True)
  (linear2): Linear(in_features=64, out_features=10, bias=True)
)
torch.Size([64, 10])
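Where does Linear(1024, 64) come from? Each 5x5 convolution uses padding=2, so it preserves the spatial size, and each MaxPool2d(2) halves it. Tracing the shapes layer by layer (a runnable sketch):

import torch
from torch.nn import Conv2d, MaxPool2d, Flatten

x = torch.ones((64, 3, 32, 32))
x = Conv2d(3, 32, 5, padding=2)(x)   # torch.Size([64, 32, 32, 32])
x = MaxPool2d(2)(x)                  # torch.Size([64, 32, 16, 16])
x = Conv2d(32, 32, 5, padding=2)(x)  # torch.Size([64, 32, 16, 16])
x = MaxPool2d(2)(x)                  # torch.Size([64, 32, 8, 8])
x = Conv2d(32, 64, 5, padding=2)(x)  # torch.Size([64, 64, 8, 8])
x = MaxPool2d(2)(x)                  # torch.Size([64, 64, 4, 4])
x = Flatten()(x)
print(x.shape)                       # torch.Size([64, 1024]): 64 * 4 * 4 = 1024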

Sequential: https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential

The same network, rewritten with nn.Sequential; add_graph additionally writes the model graph for TensorBoard:

import torch
from torch import nn
from torch.nn import Conv2d, MaxPool2d, Flatten, Linear, Sequential
from torch.utils.tensorboard import SummaryWriter
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )
    def forward(self, x):
        x = self.model1(x)
        return x
tudui = Tudui()
print(tudui)
input = torch.ones((64, 3, 32, 32))
output = tudui(input)
print(output.shape)
writer = SummaryWriter("../logs")
writer.add_graph(tudui, input)
writer.close()
Tudui(
  (model1): Sequential(
    (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): Conv2d(32, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): Flatten(start_dim=1, end_dim=-1)
    (7): Linear(in_features=1024, out_features=64, bias=True)
    (8): Linear(in_features=64, out_features=10, bias=True)
  )
)
torch.Size([64, 10])
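The graph written by add_graph can then be inspected by pointing TensorBoard at the log directory used above:

tensorboard --logdir=../logs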

8、Loss Functions

8.1 L1Loss

Reference: https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss

import torch
from torch.nn import L1Loss
input = torch.tensor([1, 2, 3], dtype=torch.float32)  # torch.Size([3])
target = torch.tensor([1, 2, 5], dtype=torch.float32)  # torch.Size([3])
input = torch.reshape(input, (1, 1, 1, 3))  # torch.Size([1, 1, 1, 3])
target = torch.reshape(target, (1, 1, 1, 3))  # torch.Size([1, 1, 1, 3])
loss1 = L1Loss(reduction='mean')  # 'mean' is the default
result = loss1(input, target)
print(result)
loss2 = L1Loss(reduction='sum')  # sum
result = loss2(input, target)
print(result)
tensor(0.6667)
tensor(2.)
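Checking by hand: the element-wise absolute errors are |1-1| = 0, |2-2| = 0 and |3-5| = 2, so reduction='mean' gives 2/3 ≈ 0.6667 and reduction='sum' gives 2.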

8.2 MSELoss

Reference: https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss

import torch
from torch.nn import MSELoss
input = torch.tensor([1, 2, 3], dtype=torch.float32)
target = torch.tensor([1, 2, 5], dtype=torch.float32)
input = torch.reshape(input, (1, 1, 1, 3))
target = torch.reshape(target, (1, 1, 1, 3))
loss = MSELoss()
result = loss(input, target)
print(result)
tensor(1.3333)
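By hand: the squared errors are 0, 0 and (3-5)^2 = 4, so the mean is 4/3 ≈ 1.3333.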

8.3 CrossEntropyLoss

Reference: https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss

Formula (from the docs; for a single sample x with target class, this is log-softmax followed by negative log-likelihood):

loss(x, class) = -log( exp(x[class]) / Σ_j exp(x[j]) ) = -x[class] + log( Σ_j exp(x[j]) )

import torch
from torch.nn import CrossEntropyLoss
input = torch.tensor([0.1, 0.2, 0.3])
target = torch.tensor([1])
input = torch.reshape(input, (1, 3))
cross = CrossEntropyLoss()
result = cross(input, target)
print(result)
tensor(1.1019)
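Plugging the numbers into the formula above confirms the result (target class 1, i.e. x[1] = 0.2):

import torch

x = torch.tensor([0.1, 0.2, 0.3])
manual = -x[1] + torch.log(torch.exp(x).sum())
print(manual)  # tensor(1.1019)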

8.4 Sequential

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )
    def forward(self, x):
        x = self.model1(x)
        return x
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    print(output)
    print(targets)
Files already downloaded and verified
tensor([[-0.0715,  0.0221, -0.0562, -0.0901,  0.0627, -0.0606,  0.0137,  0.0783,
         -0.0951, -0.1070]], grad_fn=<AddmmBackward0>)
tensor([3])
tensor([[-0.0715,  0.0304, -0.0729, -0.0767,  0.0554, -0.0834, -0.0089,  0.0624,
         -0.0777, -0.0848]], grad_fn=<AddmmBackward0>)
tensor([8])
...
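Note that the model outputs raw, unnormalized scores (logits) for the 10 classes; CrossEntropyLoss applies log-softmax internally, so no explicit Softmax layer is needed at the end of the network.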

Adding a loss function:

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear, CrossEntropyLoss
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )
    def forward(self, x):
        x = self.model1(x)
        return x
loss = CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    result_loss = loss(output, targets)
    print(result_loss)
Files already downloaded and verified
tensor(2.3437, grad_fn=<NllLossBackward0>)
tensor(2.3600, grad_fn=<NllLossBackward0>)
tensor(2.3680, grad_fn=<NllLossBackward0>)
...
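As a sanity check: an untrained 10-class classifier produces roughly uniform scores, for which the cross-entropy is about ln(10) ≈ 2.30, matching the values printed above.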

8.5 backward

import torchvision
from torch import nn
from torch.nn import Sequential, Conv2d, MaxPool2d, Flatten, Linear, CrossEntropyLoss
from torch.utils.data import DataLoader
dataset = torchvision.datasets.CIFAR10("../data", train=False, transform=torchvision.transforms.ToTensor(),
                                       download=True)
dataloader = DataLoader(dataset, batch_size=1)
class Tudui(nn.Module):
    def __init__(self):
        super(Tudui, self).__init__()
        self.model1 = Sequential(
            Conv2d(3, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 32, 5, padding=2),
            MaxPool2d(2),
            Conv2d(32, 64, 5, padding=2),
            MaxPool2d(2),
            Flatten(),
            Linear(1024, 64),
            Linear(64, 10)
        )
    def forward(self, x):
        x = self.model1(x)
        return x
loss = CrossEntropyLoss()
tudui = Tudui()
for data in dataloader:
    imgs, targets = data
    output = tudui(imgs)
    result_loss = loss(output, targets)
    result_loss.backward()  # back-propagation: computes the gradient of the loss w.r.t. every parameter
    print("OK")
