CPU-only:

```bash
python -m pip install --upgrade pip
pip install torch torchvision torchaudio
```

Verify:

```bash
python -c "import torch; print('torch', torch.__version__); print('cuda available:', torch.cuda.is_available())"
```

Or with conda:

```bash
conda install pytorch torchvision torchaudio -c pytorch
```

If you want GPU/CUDA specifically, the exact install command depends on your CUDA version. (The above works for CPU and some conda GPU setups.)
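If you do need a CUDA build via pip, the usual pattern is to point pip at the matching CUDA wheel index. A sketch (the `cu121` tag below is just an example; pick the tag matching your CUDA version from the install selector on pytorch.org):

```shell
# Example: CUDA 12.1 wheels -- swap cu121 for your CUDA version
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```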
This is the simplest full workflow: create data → define model → train → save → load → predict.
Save it as `hello_pytorch.py`:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# 1) Device (CPU/GPU)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# 2) Create a tiny dataset: y = 2x + 1 (with a little noise)
torch.manual_seed(42)
X = torch.linspace(-5, 5, steps=200).unsqueeze(1)  # shape: (200, 1)
y = 2 * X + 1 + 0.2 * torch.randn_like(X)
X, y = X.to(device), y.to(device)

# 3) Define a simple model (1-layer linear regression)
model = nn.Sequential(
    nn.Linear(1, 1)
).to(device)

# 4) Loss + optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.05)

# 5) Training loop
model.train()
for epoch in range(1, 201):
    optimizer.zero_grad()
    preds = model(X)
    loss = criterion(preds, y)
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print(f"Epoch {epoch:3d} | loss = {loss.item():.6f}")

# 6) Save the trained model weights
save_path = "linear_hello_world.pt"
torch.save(model.state_dict(), save_path)
print("Saved:", save_path)

# 7) Load weights into a fresh model instance
loaded_model = nn.Sequential(nn.Linear(1, 1)).to(device)
loaded_model.load_state_dict(torch.load(save_path, map_location=device))
loaded_model.eval()
print("Loaded model.")

# 8) Inference (predict)
with torch.no_grad():
    x_test = torch.tensor([[-3.0], [0.0], [4.0]], device=device)
    y_pred = loaded_model(x_test)
    print("x_test:", x_test.squeeze().tolist())
    print("y_pred:", y_pred.squeeze().tolist())
```

Run it:

```bash
python hello_pytorch.py
```
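A quick sanity check on what the training loop should converge to: since this is plain linear regression, the best-fit weights can also be computed in closed form. A self-contained sketch (independent of the script above) using `torch.linalg.lstsq` on the same synthetic data:

```python
import torch

# Recreate the same synthetic dataset: y = 2x + 1 + noise
torch.manual_seed(42)
X = torch.linspace(-5, 5, steps=200).unsqueeze(1)
y = 2 * X + 1 + 0.2 * torch.randn_like(X)

# Design matrix [x, 1]: the second column absorbs the bias term
A = torch.cat([X, torch.ones_like(X)], dim=1)
sol = torch.linalg.lstsq(A, y).solution  # shape (2, 1)
w, b = sol[0].item(), sol[1].item()
print(f"closed-form fit: w = {w:.3f}, b = {b:.3f}")  # should be near 2 and 1
```

If the SGD loop above is working, `model[0].weight` and `model[0].bias` should end up very close to these values.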
- Tensors: data lives in a `torch.Tensor` (like a NumPy array, but with GPU support)
- Model: an `nn.Module` (we used `nn.Linear`)
- Loss: tells you "how wrong" the model is (we used MSE)
- Optimizer: updates the weights (we used SGD)
- Training loop: `optimizer.zero_grad()` → forward pass `model(X)` → compute `loss` → `loss.backward()` → `optimizer.step()`
- Save/load: `torch.save(model.state_dict(), path)` and `model.load_state_dict(torch.load(path))`
- Inference: `model.eval()` and `with torch.no_grad():`
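To see why `torch.no_grad()` matters at inference time, here is a tiny self-contained sketch: outside the context manager, autograd records the forward pass so the output tracks gradients; inside it, that bookkeeping is skipped.

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)

# Default: autograd records the forward pass
y_train = model(torch.tensor([[1.0]]))
print(y_train.requires_grad)  # True

# Inference: eval mode + no_grad skips gradient bookkeeping
model.eval()
with torch.no_grad():
    y_infer = model(torch.tensor([[1.0]]))
print(y_infer.requires_grad)  # False
```

Skipping gradient tracking saves memory and compute, which is why the inference step in the script wraps prediction in `with torch.no_grad():`.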