How to approximate the sin function in PyTorch: two methods
Admin · 2022-10-08 · 群英技术资讯
This article shows two ways to approximate the sin function. Both use a machine-learning approach: with Python's torch module, we learn the coefficients of a polynomial that fits sin.
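Both methods below minimize the same squared-error loss over 2000 sample points in [−π, π]. Writing the cubic model as y_pred(x) = a + b·x + c·x² + d·x³ and the loss as L = Σᵢ (y_pred(xᵢ) − yᵢ)², the chain rule gives the gradients that the first listing computes by hand:

∂L/∂a = Σᵢ 2(y_pred(xᵢ) − yᵢ)
∂L/∂b = Σᵢ 2(y_pred(xᵢ) − yᵢ)·xᵢ
∂L/∂c = Σᵢ 2(y_pred(xᵢ) − yᵢ)·xᵢ²
∂L/∂d = Σᵢ 2(y_pred(xᵢ) − yᵢ)·xᵢ³

Each iteration then takes one gradient-descent step, subtracting learning_rate times each gradient from the corresponding coefficient.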
Method 1: hand-derived gradients. The first listing implements plain gradient descent, computing the four gradients explicitly:

# This example uses torch to approximate the sin function.
# A cubic polynomial is trained to mimic sin, in a simple machine-learning loop.
import torch
import math

dtype = torch.float                # data type
device = torch.device("cpu")       # device type
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)  # works like numpy's linspace
y = torch.sin(x)  # target tensor

# Randomly initialize the weights from a standard normal distribution;
# training will then improve these random parameters.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y (also a tensor) with the cubic model
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and occasionally print the loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop: gradients of the loss with respect to a, b, c, d
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update the weights by gradient descent on every iteration
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

# Print the final fitted polynomial
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
Output:
99 676.0404663085938
199 478.38140869140625
299 339.39117431640625
399 241.61537170410156
499 172.80801391601562
599 124.37007904052734
699 90.26084899902344
799 66.23435974121094
899 49.30537033081055
999 37.37403106689453
1099 28.96288299560547
1199 23.031932830810547
1299 18.848905563354492
1399 15.898048400878906
1499 13.81600570678711
1599 12.34669017791748
1699 11.309612274169922
1799 10.57749080657959
1899 10.060576438903809
1999 9.695555686950684
Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3
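To sanity-check the fit, the learned polynomial can be evaluated at a few points and compared against torch.sin. This is a minimal sketch, assuming it runs right after the script above so that a, b, c, d, device, and dtype are still in scope:

# Hypothetical follow-up (not part of the original article):
# compare the learned cubic against the true sin at a few sample points.
test_x = torch.tensor([0.25, 1.0, 2.5], device=device, dtype=dtype)
approx = a + b * test_x + c * test_x ** 2 + d * test_x ** 3
print('sin   :', torch.sin(test_x))  # ground truth
print('cubic :', approx)             # polynomial approximation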
Method 2: autograd. The second listing lets PyTorch's autograd compute the same gradients automatically:

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss using operations on Tensors.
    # Now loss is a scalar Tensor; loss.item() gets the Python number it holds.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call will compute the
    # gradient of loss with respect to all Tensors with requires_grad=True.
    # After this call a.grad, b.grad, c.grad and d.grad will be Tensors holding
    # the gradient of the loss with respect to a, b, c, d respectively.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because weights have requires_grad=True, but we don't need to track this
    # in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
Output:
99 1702.320556640625
199 1140.3609619140625
299 765.3402709960938
399 514.934326171875
499 347.6383972167969
599 235.80038452148438
699 160.98876953125
799 110.91152954101562
899 77.36819458007812
999 54.883243560791016
1099 39.79965591430664
1199 29.673206329345703
1299 22.869291305541992
1399 18.293842315673828
1499 15.214327812194824
1599 13.1397705078125
1699 11.740955352783203
1799 10.796865463256836
1899 10.159022331237793
1999 9.727652549743652
Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3
Both methods fit only a third-order polynomial, so the approximation is reasonable only when x is small: sin x = x − x³/6 + x⁵/120 − …, and a cubic can match at most the first two terms of that series. Also, because the coefficients are initialized randomly, the results will differ slightly from run to run.
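As a natural extension of method 2, PyTorch's torch.optim package can take over the hand-written update step. The sketch below is a variant not from the original article: it packs the four coefficients into one parameter tensor (an assumption of this sketch) and lets torch.optim.SGD perform the updates:

import torch
import math

dtype = torch.float
device = torch.device("cpu")

x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# One tensor holding [a, b, c, d]; SGD will update it in place.
params = torch.randn(4, device=device, dtype=dtype, requires_grad=True)
optimizer = torch.optim.SGD([params], lr=1e-6)  # same step size as above

for t in range(2000):
    a, b, c, d = params  # unpacking keeps the autograd graph intact
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    loss = (y_pred - y).pow(2).sum()

    optimizer.zero_grad()  # clear gradients from the previous step
    loss.backward()        # autograd fills params.grad
    optimizer.step()       # gradient-descent update

print(f'Result: y = {params[0].item()} + {params[1].item()} x '
      f'+ {params[2].item()} x^2 + {params[3].item()} x^3')

With the optimizer in charge of the update rule, swapping SGD for another optimizer such as Adam only requires changing one line.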
This concludes "How to approximate the sin function in PyTorch: two methods". We hope it answered your questions; trying the code yourself is just as important, as hands-on practice will deepen your understanding. For more articles like this, follow 群英网络, where practical posts are shared every day!