How to Implement a Fully Connected Layer in PyTorch
Admin · 2022-06-13 · Qunying Tech News · 1088 views
This article walks through how to implement a fully connected layer in PyTorch, demonstrating the whole process with practical examples that are simple and quick to follow. The fully connected network is the most basic neural network structure; in English it is called a fully connected network, usually abbreviated FC.
The FC rule is simple: every node in every layer except the input layer is connected to all nodes of the previous layer.
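To make the rule concrete, here is a minimal sketch (added for illustration, not part of the original example) showing that a single FC layer is just a matrix multiplication plus a bias, which is exactly what nn.Linear computes:

import torch
import torch.nn as nn

x = torch.randn(4, 784)      # a batch of 4 flattened 28x28 images
fc = nn.Linear(784, 200)     # fully connected layer: 784 inputs -> 200 outputs

# nn.Linear stores a weight of shape (200, 784) and a bias of shape (200,),
# and computes x @ weight.t() + bias
manual = x @ fc.weight.t() + fc.bias
print(torch.allclose(fc(x), manual))   # True: both give the same (4, 200) output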
Take the MNIST example from the previous post:
import torch
import torch.utils.data
from torch import optim
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,   # set download=True on the first run
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

# weights and biases for three fully connected layers: 784 -> 200 -> 200 -> 10
w1, b1 = torch.randn(200, 784, requires_grad=True), torch.zeros(200, requires_grad=True)
w2, b2 = torch.randn(200, 200, requires_grad=True), torch.zeros(200, requires_grad=True)
w3, b3 = torch.randn(10, 200, requires_grad=True), torch.zeros(10, requires_grad=True)

# Kaiming initialization keeps the ReLU activations in a reasonable range
torch.nn.init.kaiming_normal_(w1)
torch.nn.init.kaiming_normal_(w2)
torch.nn.init.kaiming_normal_(w3)

def forward(x):
    x = x @ w1.t() + b1
    x = F.relu(x)
    x = x @ w2.t() + b2
    x = F.relu(x)
    x = x @ w3.t() + b3
    x = F.relu(x)
    return x

optimizer = optim.Adam([w1, b1, w2, b2, w3, b3], lr=learning_rate)
criteon = torch.nn.CrossEntropyLoss()

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)

        logits = forward(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        logits = forward(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)))
Above, we defined every w and b by hand and wrote our own forward function. If we use the built-in fully connected layer instead, the code becomes much more concise and readable.
First, define our own network class:
class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x
It inherits from nn.Module and defines the whole network structure itself.
Here inplace=True makes the activation reuse the input tensor's storage instead of allocating new memory.
Beyond that, the module can be called directly on the input; there is no need to define the parameters by hand or spell out the matrix operations, which is much more convenient.
We can also see that initialization is handled automatically, so the manual initialization step from before is no longer needed.
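For example (a small sketch added for illustration, assuming the MLP class defined above), you can inspect the automatically initialized parameters, and if you still want the explicit Kaiming-normal initialization from the manual version, you can apply it with net.apply:

from torch import nn

net = MLP()
# nn.Linear has already initialized its weight and bias (PyTorch's default is a
# Kaiming-style uniform scheme), so the network is trainable as-is.
print(net.model[0].weight.shape, net.model[0].bias.shape)   # (200, 784) and (200,)

# Optional: reproduce the explicit Kaiming-normal initialization from the manual version.
def init_weights(m):
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight)
        nn.init.zeros_(m.bias)

net.apply(init_weights)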
It is also worth distinguishing the two activation interfaces used here: nn.ReLU (and nn.LeakyReLU) is a class-style interface, while F.relu is a functional interface.
The class-style names are capitalized and must be instantiated before use; the functional names are lowercase and can be called directly.
Most importantly, the functional interface offers more freedom and is better suited to custom operations.
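A quick side-by-side comparison (a minimal sketch added for illustration) of the two interfaces:

import torch
from torch import nn
import torch.nn.functional as F

x = torch.randn(2, 3)

relu_layer = nn.ReLU()      # class-style interface: instantiate first, then call
out1 = relu_layer(x)

out2 = F.relu(x)            # functional interface: call it directly

print(torch.equal(out1, out2))   # True: both compute max(x, 0) elementwise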
import torch
import torch.utils.data
from torch import optim, nn
from torchvision import datasets
from torchvision.transforms import transforms
import torch.nn.functional as F

batch_size = 200
learning_rate = 0.001
epochs = 20

train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=True, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('mnistdata', train=False, download=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=batch_size, shuffle=True)

class MLP(nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True)
        )

    def forward(self, x):
        x = self.model(x)
        return x

# fall back to the CPU if no GPU is available
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = MLP().to(device)
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)

        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch : {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx*len(data), len(train_loader.dataset),
                100.*batch_idx/len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28*28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()

        pred = logits.data.max(1)[1]
        correct += pred.eq(target.data).sum()

    test_loss /= len(test_loader.dataset)
    print('\nTest set : Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)'.format(
        test_loss, correct, len(test_loader.dataset),
        100.*correct/len(test_loader.dataset)))
Supplement: implementing a one-hidden-layer fully connected neural network in PyTorch
torch.nn provides what we need to define the model, the network layers, and the loss function.
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
Above, we updated the parameters by hand with param -= learning_rate * param.grad.
With torch.optim the parameters can be updated automatically. The optim package provides a range of optimization methods, including SGD with momentum, RMSProp, Adam, and more.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()