Posters

Everything I Googled During My First PyTorch Project

Presented by

Jai Chandnani

Experience Level:

Just starting out

Description

PyTorch is famous for being "Pythonic," but the first training run can still feel like a collection of incantations: DataLoader, nn.Module, loss, backward(), optimizer, eval(). You copy a tutorial, hit run, and something happens. But do you understand why?

This poster walks through the smallest end-to-end PyTorch project that actually learns, connecting each step to the underlying concept. Starting with a small dataset (images or tabular, the same template works for both), it traces how batches flow through a model, how a single loss value becomes gradients for thousands of parameters, and how the optimizer updates those parameters in the right direction.
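The loop described above can be sketched in a few lines. This is a minimal, hedged illustration, not the poster's actual template: the synthetic tabular data, layer sizes, and learning rate here are placeholder choices.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic tabular data (illustrative): 256 samples, 4 features, 3 classes.
X = torch.randn(256, 4)
y = torch.randint(0, 3, (256,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
loss_fn = nn.CrossEntropyLoss()  # expects raw logits, not probabilities
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(3):
    for xb, yb in loader:              # batches flow through the model
        logits = model(xb)             # forward: (32, 4) -> (32, 3)
        loss = loss_fn(logits, yb)     # one scalar summarizes the batch
        optimizer.zero_grad()          # clear the previous step's gradients
        loss.backward()                # scalar loss -> gradients for every parameter
        optimizer.step()               # update parameters in the downhill direction
```

The shape comments trace the tensor dimensions at each step, which is exactly the kind of annotation the poster's training-loop map provides.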

Along the way, it highlights the concepts that cause the most confusion for newcomers (such as myself!): tensor shapes and why they break, logits vs. probabilities and when to use which, why gradients accumulate if you forget to zero them, and what model.train() and model.eval() actually change under the hood.
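Two of these confusions can be demonstrated in a few lines of plain PyTorch (a small sketch, not the poster's material): logits become probabilities only after softmax, and gradients accumulate across backward() calls until you zero them.

```python
import torch

# Logits are raw scores; softmax turns them into probabilities that sum to 1.
logits = torch.tensor([[2.0, 0.5, -1.0]])
probs = torch.softmax(logits, dim=1)
# nn.CrossEntropyLoss applies log-softmax internally, so it takes logits directly.

# Gradients accumulate: two backward() calls without zeroing add up.
w = torch.ones(1, requires_grad=True)
(w * 3).backward()
(w * 3).backward()
print(w.grad)      # tensor([6.]) -- 3 + 3, because the gradient was never cleared

w.grad.zero_()
(w * 3).backward()
print(w.grad)      # tensor([3.]) -- what you get after zeroing first
```

This is why every training loop calls optimizer.zero_grad() (or equivalent) once per batch.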

The poster will include an annotated training loop map showing forward → loss → backward → step with the corresponding PyTorch calls, plus a tensor-shape trace so you always know what dimensions to expect. It also covers two sanity checks every beginner should run: overfitting a single batch on purpose, and verifying that parameters actually change after optimizer.step(). A troubleshooting section addresses common first-week errors, from device and dtype mismatches to wrong loss/output pairings to the classic forgotten torch.no_grad() during evaluation.
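Both sanity checks can be combined into one short script. This is a rough sketch under assumed settings (batch size, optimizer, step count are all illustrative), not the poster's exact procedure: snapshot the parameters, hammer on a single fixed batch, then confirm the parameters moved and the loss collapsed.

```python
import torch
from torch import nn

torch.manual_seed(0)

# One fixed batch: if the model cannot drive the loss toward zero here,
# something upstream (wiring, loss choice, gradients) is broken.
xb, yb = torch.randn(8, 4), torch.randint(0, 3, (8,))
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

before = [p.detach().clone() for p in model.parameters()]

for _ in range(200):                     # overfit this one batch on purpose
    loss = loss_fn(model(xb), yb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sanity check 2: did optimizer.step() actually change the parameters?
changed = any(not torch.equal(b, p.detach())
              for b, p in zip(before, model.parameters()))
print(changed, loss.item())              # expect True, and a loss near zero
```

If `changed` is False, the optimizer was probably constructed over the wrong parameters; if the loss plateaus well above zero, suspect the loss/output pairing or the data pipeline.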

Attendees will walk away with a reusable starter template they can adapt to their own projects, and a mental model for debugging when code runs but doesn't learn. No ML background assumed, just Python and curiosity.
