diff --git a/_posts/2021-6-8-overview-of-pytorch-autograd-engine.md b/_posts/2021-6-8-overview-of-pytorch-autograd-engine.md
index 837c0fdbf25c..ffc8ce0e962c 100644
--- a/_posts/2021-6-8-overview-of-pytorch-autograd-engine.md
+++ b/_posts/2021-6-8-overview-of-pytorch-autograd-engine.md
@@ -37,7 +37,7 @@ In the example above, when multiplying x and y to obtain v, the engine will exte

Figure 2: Computational graph extended after executing the logarithm

-Continuing, the engine now calculates the log(v) operation and extends the graph again with the log derivative that it knows to be 1/v. This is shown in figure 3. This operation generates a result that, when propagated backward and multiplied by the multiplication derivative as in the chain rule, yields the derivatives with respect to x and y.
+Continuing, the engine now calculates the log(v) operation and extends the graph again with the log derivative that it knows to be 1/v. This is shown in figure 3. This operation generates a result that, when propagated backward and multiplied by the multiplication derivative as in the chain rule, yields the derivatives with respect to x and y.
@@ -111,7 +111,7 @@ We can execute the same expression in PyTorch and calculate the gradient of the
>>> y.backward(1.0)
>>> x.grad
tensor([1.3633,
-        0.1912])
+        0.1912])
The result is the same as our hand-calculated Jacobian-vector product!
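
For readers looking at this diff without the post open, below is a minimal runnable sketch of the computation the second hunk prints. The expression f(x1, x2) = log(x1 * x2) * sin(x2) and the inputs [0.5, 0.75] are assumptions (they do not appear in the hunks shown), chosen because they reproduce the printed gradient tensor([1.3633, 0.1912]); the post calls `y.backward(1.0)`, while the sketch passes an explicit seed tensor.

```python
import torch

# Sketch of the example the second hunk prints. The expression and the
# inputs below are assumptions (not shown in the diff); they reproduce
# the gradient tensor([1.3633, 0.1912]) that appears in the hunk.
x = torch.tensor([0.5, 0.75], requires_grad=True)

v = x[0] * x[1]          # multiplication node; its local derivatives are x[1] and x[0]
w = torch.log(v)         # log node; its local derivative is 1 / v
y = w * torch.sin(x[1])  # final scalar output

# Backward pass: seed the output with 1.0 and let the engine apply the
# chain rule through the recorded graph (a Jacobian-vector product).
y.backward(torch.tensor(1.0))
print(x.grad)  # tensor([1.3633, 0.1912])

# Hand-applied chain rule for comparison with x.grad.
with torch.no_grad():
    dy_dx1 = (1.0 / v) * x[1] * torch.sin(x[1])
    dy_dx2 = (1.0 / v) * x[0] * torch.sin(x[1]) + torch.log(v) * torch.cos(x[1])
    print(dy_dx1.item(), dy_dx2.item())  # ~1.3633, ~0.1912
```

Because the output is a scalar and the seed vector is 1.0, the Jacobian-vector product reduces to the plain gradient, which is why `x.grad` matches the hand-applied chain-rule values computed at the end of the sketch.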