# Update step in a PyTorch implementation of Newton's method - Q&A - Alibaba Cloud Developer Community

## Update step in a PyTorch implementation of Newton's method

```python
import torch
from torch import DoubleTensor
from torch.autograd import Variable

x = Variable(DoubleTensor([1]), requires_grad=True)

for i in range(5):
    y = x - torch.cos(x)
    y.backward()
    x = Variable(x.data - y.data / x.grad.data, requires_grad=True)

print(x.data)  # tensor([0.7390851332151607], dtype=torch.float64) (correct)
```

```python
x = Variable(DoubleTensor([1]), requires_grad=True)
y = x - torch.cos(x)
y.backward(retain_graph=True)

for i in range(5):
    x.data = x.data - y.data / x.grad.data
    y.data = x.data - torch.cos(x.data)
    y.backward(retain_graph=True)

print(x.data)  # tensor([0.7417889255761136], dtype=torch.float64) (wrong)
```
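The wrong result in the second version comes from gradient accumulation: `x.grad` is never zeroed, so every `backward()` call adds the new derivative onto the one already stored, and the Newton update divides `f(x)` by this growing sum instead of by `f'(x)`. A minimal sketch of the effect (using plain tensors rather than the deprecated `Variable` API):

```python
import torch

# f(x) = x - cos(x), so f'(x) = 1 + sin(x)
x = torch.tensor([1.0], dtype=torch.float64, requires_grad=True)

y = x - torch.cos(x)
y.backward()
first = x.grad.clone()   # f'(1) = 1 + sin(1)

# calling backward() again WITHOUT zeroing x.grad: gradients accumulate
y = x - torch.cos(x)
y.backward()
second = x.grad.clone()  # now 2 * (1 + sin(1)), no longer f'(1)

print(first.item(), second.item())
```

After the second call, `x.grad` holds twice the true derivative, which is exactly why the second code version drifts away from the root.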

Tags: PyTorch, algorithm frameworks/tools

• 一码平川MACHEL
2019-07-17 23:26:45

I think your first version of the code is the better one, meaning that it does not keep reusing the same computation graph on every run.

```python
import torch

# initial guess
guess = torch.tensor([1], dtype=torch.float64, requires_grad=True)

# function to optimize
def my_func(x):
    return x - torch.cos(x)

def newton(func, guess, runs=5):
    for _ in range(runs):
        # evaluate our function with current value of `guess`
        value = func(guess)
        value.backward()
        # update our `guess` based on the gradient
        guess.data -= (value / guess.grad).data
        # zero out current gradient to hold new gradients in next iteration
        guess.grad.data.zero_()
    return guess.data  # return our final `guess` after 5 updates

# call starts
result = newton(my_func, guess)

# output of `result`:
# tensor([0.7391], dtype=torch.float64)
```
On every run, `my_func()` evaluates the function that defines the computation graph, using the current value of `guess`. Once the result is returned, we compute the gradient (with the `value.backward()` call). Using this gradient we update `guess`, and then zero the gradient out so that it can hold a fresh gradient the next time `value.backward()` is called. In other words, this stops the gradient from accumulating: if you do not zero it, PyTorch accumulates gradients by default, which is exactly the behavior we want to avoid here.
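For completeness, in current PyTorch the same loop is usually written without touching `.data`: the in-place update goes inside a `torch.no_grad()` block and the gradient is cleared with `x.grad.zero_()`. A sketch along those lines (assuming a recent PyTorch release, not code from the thread above):

```python
import torch

def newton(func, x0, runs=5):
    """Newton's method: x <- x - f(x) / f'(x), with f'(x) from autograd."""
    x = torch.tensor([x0], dtype=torch.float64, requires_grad=True)
    for _ in range(runs):
        value = func(x)
        value.backward()
        with torch.no_grad():
            x -= value / x.grad  # Newton update, not tracked by autograd
        x.grad.zero_()           # reset so the next backward() starts fresh
    return x.detach()

root = newton(lambda x: x - torch.cos(x), 1.0)
print(root)  # converges to the fixed point of cos(x), about 0.7391
```

The `torch.no_grad()` context allows the in-place update of a leaf tensor that requires grad, and `x.grad.zero_()` plays the same role as `guess.grad.data.zero_()` in the answer above.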
