import torch
from torch import nn
torch.manual_seed(0)
# Define ABC with requires_grad=True so autograd treats it like a model
# parameter: it becomes a leaf node in the computation graph and gradients
# can flow back to it.
ABC = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
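
# Illustrative sanity check (not in the original gist): ABC is a leaf tensor
# that autograd tracks, which is what allows it to receive a gradient below.
assert ABC.is_leaf and ABC.requires_grad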
F = nn.Sequential(
    nn.Linear(3, 8),
    nn.Sigmoid(),
    nn.Linear(8, 1),
)
y = F(ABC)
# Backpropagation from a non-scalar output requires grad_outputs, the "seed"
# gradient to propagate. It is usually dLoss/dy, but here there is no loss
# function: taking Loss = y gives dLoss/dy = [1, ...], i.e. ones_like(y).
# Note that grad() returns a tuple with one gradient per input, so the
# gradient tensor itself is dy_dx[0].
dy_dx = torch.autograd.grad(
    outputs=y, inputs=ABC, grad_outputs=torch.ones_like(y)
)
print("ABC:", ABC)
print("y:", y)
print("dy/dx:", dy_dx)