Backpropagation Part 3: Systems, Stability, Interpretability, Frontiers
Theory assumes infinite precision; hardware delivers float16. Bridge the gap between mathematical backprop and production systems. In this post, we cover a lot of "practical" ground: from PyTorch's tape to mixed precision training, from numerical disasters to systematic testing, from gradient monitoring to interpretability. What breaks, why, and how to fix it.
From Concepts to Systems
You understand backprop now. You can trace gradients through a computation graph, derive VJPs for new operations, and implement a training loop from scratch. You've built the mental model. The math makes sense.
Then you try to train something real. Your loss suddenly spikes to NaN at step 247. Your model fits perfectly on a toy dataset but runs out of memory on the full batch. Gradients that should flow smoothly through your carefully designed architecture somehow vanish in the middle layers. You try mixed precision to speed things up and everything explodes. The framework says your custom layer's gradient is wrong but won't tell you where or why.
Here is the gap: understanding backprop and productionizing backprop are different skills. The math you learned in Parts 1 and 2 is true and necessary. But it assumes infinite precision, unlimited memory, and perfect numerical stability. Real training has none of these.
This is where many implementations quietly break. Not with dramatic failures that crash immediately, but with silent numerical drift that makes your model train 5% slower, converge 10% worse, or mysteriously plateau before reaching good performance. You might blame the architecture, the optimizer, or the learning rate schedule, while the actual issue is different: gradients corrupted by numerical errors that are easy to miss.
Part 3 bridges this gap. We'll map the conceptual backward pass you understand to how PyTorch and other libraries actually implement it. We'll learn which operations need custom gradients and how to test that they're correct. We'll use mixed precision without destroying numerical stability, trade memory for compute when needed, and build monitoring systems that catch gradient pathologies before they waste days of training.
The best part? Everything from Part 1 and Part 2 translates directly. Those local rules and VJP patterns become "saved tensors" and backward closures in real frameworks. The chain rule we traced by hand is exactly what the autograd tape records. The stability principles we'll cover aren't hacks, they're careful applications of the same math, just with finite precision in mind.
Think of it like this: Parts 1 and 2 taught you to drive on an empty road in perfect conditions. Part 3 teaches you to drive in traffic, in rain, with limited fuel, while monitoring your engine temperature. Same vehicle, same physics, but a whole new set of skills to master. Let's dive in!
Autodiff Systems in Practice
We've explored the mathematics of backpropagation: computational graphs, VJPs, the chain rule flowing backward. But when you type loss.backward() in PyTorch or grad(loss) in JAX, what actually happens? How do these frameworks turn your Python code into efficient gradient computations?
This section bridges the theory to the tools. Every framework (PyTorch, JAX, TensorFlow) implements the same core ideas covered earlier, but with different engineering choices. Understanding these choices helps you debug when things go wrong, optimize when things are slow, and extend when the defaults aren't enough.
The story starts with a fundamental choice: when do you build the computational graph?
Define by Run vs Static Graphs
The biggest architectural split in autodiff frameworks is when they construct the computational graph. This choice affects everything: debugging experience, optimization opportunities, even what kinds of models you can easily express.
The Dynamic Approach: Building as You Go
PyTorch popularized define-by-run (also called dynamic graphs or eager execution). Your Python code IS the graph definition.
Every operation immediately executes and adds a node to an implicit graph. The graph exists only for this specific forward pass. Run the same code with different control flow and you get a different graph.
Here's exactly how PyTorch builds the graph as your code runs:
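A minimal sketch of what that looks like; the branch is contrived, but the point is that whichever branch actually runs is what gets recorded:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# Control flow is plain Python; the recorded graph depends on the data.
if x.sum() > 0:
    y = x * 2          # this run records a multiply node
else:
    y = x.exp()        # a different run would record an exp node instead

loss = y.sum()
print(loss.grad_fn)    # the node that produced `loss` (SumBackward0)
loss.backward()
print(x.grad)          # tensor([2., 2., 2.])
```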
Python control flow just works. Loops, conditionals, recursion, all behave exactly as you'd expect. The graph faithfully represents what your code actually did, not what it might do.
The cost: No optimization before execution. The framework sees only one operation at a time, making it harder to fuse operations or optimize memory usage. It's like compiling your code one line at a time instead of seeing the whole program.
The Static Approach: Plan Then Execute
TensorFlow 1.x took the opposite approach: define your entire computation symbolically, then execute it:
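A sketch in the TensorFlow 1.x style (runnable today through the tf.compat.v1 shim):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build the graph symbolically: no numbers flow yet.
x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.Variable(tf.ones([3, 1]))
y = tf.matmul(x, w)
loss = tf.reduce_sum(y)
grads = tf.gradients(loss, [w])[0]   # gradient nodes are added to the same graph

# Only now does anything actually execute.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grads, feed_dict={x: [[1.0, 2.0, 3.0]]})
    print(g)   # [[1.], [2.], [3.]]
```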
You build a complete blueprint of your computation. The framework sees the entire graph before running anything, enabling powerful optimizations:
- Fuse multiple operations into single kernels
- Reorder operations for better memory access
- Compile to optimized machine code
- Distribute across devices intelligently
But the cost was huge: Python control flow didn't work naturally. You needed special graph operations for conditionals (tf.cond) and loops (tf.while_loop). Debugging was painful because errors appeared during graph execution, not construction.
The Hybrid Reality: Best of Both Worlds
Modern frameworks learned from both approaches. PyTorch added TorchScript for static optimization when needed. TensorFlow 2.x adopted eager execution by default but kept XLA compilation. JAX took a unique middle path:
JAX treats everything as functional transformations. Your forward pass is a pure function. grad transforms it into its gradient function. jit compiles it for speed. The functions compose: you can JIT a gradient, or take gradients of JIT-compiled code.
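A minimal sketch of those composable transformations on a toy loss:

```python
import jax
import jax.numpy as jnp

def loss_fn(w, x, y):
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.grad(loss_fn)        # transform: function -> gradient function
fast_grad_fn = jax.jit(grad_fn)    # transform: function -> compiled function

w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0]])
y = jnp.array([1.0])

print(grad_fn(w, x, y))            # gradients w.r.t. w
print(fast_grad_fn(w, x, y))       # same values, compiled with XLA
```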
The lesson: there's no universally best approach. Dynamic graphs excel at research and debugging. Static graphs excel at production and optimization. Modern frameworks provide both, letting you choose based on your needs.
What Lives on the Tape
When PyTorch executes y = x * 2, it doesn't just compute the result. It records information for the backward pass. This recording is called the "tape" (or "autograd graph"). Understanding what gets saved explains both memory usage and why certain operations are differentiable.
The Tape Entry Anatomy
Each operation on the tape stores exactly what it needs for its backward pass, no more, no less.
The framework is smart about what to save:
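A sketch of what different ops need, poking at the grad_fn nodes PyTorch exposes:

```python
import torch

x = torch.randn(3, requires_grad=True)
w = torch.randn(3, requires_grad=True)

a = x + w        # add: needs to save nothing, gradients just pass through
b = x * w        # mul: must save both inputs (each one's gradient needs the other)
c = b.relu()     # relu: only needs to know which entries were positive
loss = c.sum()

# Each result points at the node that will run its backward rule,
# and each node links back to the nodes of its inputs.
print(a.grad_fn)                    # AddBackward0
print(b.grad_fn)                    # MulBackward0
print(loss.grad_fn.next_functions)  # the chain back toward x and w
```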
This selective saving is crucial for memory efficiency. A ReLU layer with a million neurons saves a million bits (125KB) instead of a million floats (4MB).
The Tape's Lifetime
The tape exists only as long as you might need gradients.
This is why you need retain_graph=True for multiple backward passes: it keeps the tape alive:
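A minimal sketch:

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()

loss.backward(retain_graph=True)   # keep the tape alive for another pass
loss.backward()                    # works: tape was retained (gradients accumulate)

loss2 = (x ** 3).sum()
loss2.backward()                   # default behavior: the tape for loss2 is freed here
# loss2.backward()                 # would raise "Trying to backward through the graph
                                   # a second time" because the tape is gone
```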
No Tape, No Gradient
Operations without gradient tracking don't create tape entries.
This matters for memory and speed. Inference should always use no_grad() to avoid building a useless tape:
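A minimal sketch:

```python
import torch

model = torch.nn.Linear(10, 2)
x = torch.randn(4, 10)

with torch.no_grad():      # nothing inside is recorded on the tape
    y = model(x)

print(y.requires_grad)     # False
print(y.grad_fn)           # None: no tape entry, no memory spent on one
```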
Custom Gradients: When Autodiff Isn't Enough
Sometimes you need to override the automatic gradient. Maybe you have a more efficient formula. Maybe the automatic version is numerically unstable. Maybe you're wrapping a black-box operation. All frameworks provide hooks for custom gradients.
The Basic Pattern
Here's how you define custom gradients in PyTorch:
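A sketch using torch.autograd.Function, re-deriving exp as a toy example:

```python
import torch

class MyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = torch.exp(x)
        ctx.save_for_backward(y)   # backward needs y, not x
        return y

    @staticmethod
    def backward(ctx, grad_output):
        (y,) = ctx.saved_tensors
        return grad_output * y     # VJP: d(exp)/dx = exp(x) = y

x = torch.randn(5, requires_grad=True)
out = MyExp.apply(x).sum()
out.backward()
print(x.grad)                      # matches torch.exp(x)
```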
When Custom Gradients Matter
Real examples where custom gradients are essential:
Numerical Stability
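One concrete case, sketched as a toy: the gradient of an L2 norm is x divided by the norm, which autodiff turns into NaN at the origin. A custom backward can pick the safe subgradient instead:

```python
import torch

class SafeNorm(torch.autograd.Function):
    """L2 norm with a gradient that is zero (not NaN) at the origin.

    Autodiff on torch.sqrt((x**2).sum()) produces x / norm, which is 0/0 = NaN
    when x == 0. Overriding the backward pass removes the instability.
    """

    @staticmethod
    def forward(ctx, x):
        norm = torch.sqrt((x ** 2).sum())
        ctx.save_for_backward(x, norm)
        return norm

    @staticmethod
    def backward(ctx, grad_output):
        x, norm = ctx.saved_tensors
        safe = torch.where(norm > 0, norm, torch.ones_like(norm))
        return grad_output * x / safe   # 0 at the origin instead of NaN

x = torch.zeros(3, requires_grad=True)
SafeNorm.apply(x).backward()
print(x.grad)   # tensor([0., 0., 0.]) -- the naive version would give NaN
```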
Straight-Through Estimators
A special case of custom gradients: operations that aren't differentiable but need gradients for training:
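A sketch of the classic rounding straight-through estimator:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Round to the nearest integer in forward, pretend it's the identity in backward."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output          # straight-through: gradient passes unchanged

x = torch.tensor([0.2, 0.7, 1.4], requires_grad=True)
y = RoundSTE.apply(x)
# equivalent one-liner: y = x + (x.round() - x).detach()
y.sum().backward()
print(y)        # tensor([0., 1., 1.])
print(x.grad)   # tensor([1., 1., 1.]) -- round() alone would give zero gradient
```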
This trick enables training with discrete operations (quantization, binarization, sampling) by providing a smooth proxy gradient.
Mixed Precision: The Balancing Act
Modern GPUs have special hardware for 16-bit operations that runs 2 to 10× faster than 32-bit. But training in pure FP16 is numerically unstable. Mixed precision training uses both: compute in FP16 for speed, accumulate in FP32 for stability.
The Precision Hierarchy
Different parts of training need different precision: forward activations and most matrix multiplies run in FP16, gradients are computed in FP16 but accumulated in FP32, and the master copy of the weights (plus optimizer state) stays in FP32.
Why this works:
- Forward: Most values are in reasonable range (0.001 to 1000)
- Gradients: Can be tiny (10^-8) but relative precision matters more than absolute
- Parameters: Need high precision for small accumulated updates
Loss Scaling: Preventing Underflow
The killer for FP16 training: gradients underflowing to zero. FP16's smallest positive value is ~6e-8. Many gradients are smaller. The solution: scale the loss up before backward, scale gradients down before updates:
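A minimal sketch of static scaling on a toy linear model (the scale of 1024 is an arbitrary choice here):

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
inputs, targets = torch.randn(16, 8), torch.randn(16, 1)

scale = 1024.0                                  # static scale factor

loss = torch.nn.functional.mse_loss(model(inputs), targets)
(loss * scale).backward()                       # gradients are 1024x larger: small ones survive FP16

for p in model.parameters():
    p.grad /= scale                             # undo the scaling before the optimizer sees it
optimizer.step()
optimizer.zero_grad()
```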
But static scaling is fragile. If gradients are too large, they overflow even with scaling. Too small, and scaling doesn't help. Dynamic loss scaling adjusts automatically.
The scaler's strategy:
- Start with a large scale (65536)
- If gradients overflow (inf/nan), skip the update and halve the scale
- If N steps succeed without overflow, double the scale
- Find the sweet spot automatically
Modern frameworks handle this automatically with autocast regions:
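A sketch of the standard PyTorch recipe, using torch.autocast plus GradScaler on a toy model (on a CPU-only machine both degrade to plain FP32):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for _ in range(10):
    x = torch.randn(32, 128, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad()
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(x), y)   # FP16 compute on GPU

    scaler.scale(loss).backward()   # scaled loss -> scaled gradients
    scaler.step(optimizer)          # unscales, skips the step if inf/nan appeared
    scaler.update()                 # grows/shrinks the scale automatically
```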
Memory Optimization: Trading Compute for Space
The bitter truth about modern deep learning: memory, not compute, is often the bottleneck. A V100 GPU can do 130 TFLOPS but has only 32GB of memory. You run out of memory long before you run out of compute. This section covers techniques to fit larger models into limited memory.
Gradient Checkpointing: Recompute to Remember Less
The math was covered in an earlier section, but here's how you actually use it:
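A minimal sketch using torch.utils.checkpoint on two toy blocks; block1 is the checkpointed segment whose activations get recomputed:

```python
import torch
from torch.utils.checkpoint import checkpoint

block1 = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU())
block2 = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.ReLU())

x = torch.randn(32, 256, requires_grad=True)

# block1's intermediate activations are NOT stored; they are recomputed during backward.
h = checkpoint(block1, x, use_reentrant=False)
out = block2(h).sum()
out.backward()
```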
The key decision: which layers to checkpoint?
Gradient Accumulation: Fake Larger Batches
When you can't fit your desired batch size, accumulate gradients over multiple smaller batches:
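A sketch with a fake loader of micro-batches; accum_steps and the toy model are arbitrary choices:

```python
import torch

model = torch.nn.Linear(64, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loader = [(torch.randn(8, 64), torch.randn(8, 1)) for _ in range(8)]
accum_steps = 4                          # effective batch = 4 x micro-batch

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()      # divide so the sum matches a big-batch mean
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```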
This gives you the gradient of a large batch using memory for a small batch. The trade-off: batch norm statistics are computed on the small batch, which can hurt performance.
Memory Profiling: Find the Leaks
Before optimizing, measure:
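A sketch using PyTorch's built-in CUDA memory counters (the model here is just a placeholder to generate some allocations):

```python
import torch

if torch.cuda.is_available():
    torch.cuda.reset_peak_memory_stats()

    model = torch.nn.Linear(4096, 4096).cuda()
    x = torch.randn(64, 4096, device="cuda")
    loss = model(x).sum()
    loss.backward()

    print(f"allocated now:  {torch.cuda.memory_allocated() / 1e6:.1f} MB")
    print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1e6:.1f} MB")
    print(torch.cuda.memory_summary())   # detailed breakdown per allocation pool
```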
Common memory wastes to look for: accumulating the loss tensor itself instead of loss.item() (which keeps the whole graph alive), running evaluation without no_grad(), and holding references to intermediate tensors or entire batches across iterations.
The key insight for memory optimization: activation storage dominates during training. Parameters are fixed size, but activations grow with batch size and sequence length. That's why techniques like checkpointing and mixed precision have such large impact: they directly reduce activation memory.
Modern training is a careful sequence: use mixed precision for a 2× memory win, add gradient checkpointing for another 2 to 4× depending on depth, accumulate gradients to simulate larger batches, and profile constantly to ensure you're not leaking memory. Get this right and you can train models that once seemed impossible to fit on your hardware.
Numerical Stability and Testing
We've explored how backprop works mathematically, how to implement it, and how to manage gradient pathologies. In practice: even if you get all the math right, numerical computation on finite precision hardware can silently distort gradients. A single overflow in a softmax can cascade into NaN losses. An accumulation of rounding errors can make gradients point in the wrong direction. A mismatch between your mental model and your actual implementation can waste weeks of debugging.
This section is about trust. How do you know your gradients are correct? How do you prevent numerical disasters before they happen? How do you systematically test layers to catch bugs early? These aren't glamorous topics like attention mechanisms or diffusion models, but they're the difference between research that works and research that mysteriously doesn't.
The irony is, in practice, the bugs that waste the most time are the ones that do not crash. Your network trains, loss decreases, but performance plateaus below expectations. It is easy to blame the architecture, the data, or the hyperparameters, while the underlying issue is a subtle numerical problem that degrades gradients just enough to hurt learning without obviously failing.
Here's how to build bulletproof implementations: patterns that are numerically stable, testing strategies that catch bugs early, and monitoring approaches that reveal problems before they become disasters.
Stable Patterns: Computing Without Exploding
The problem with floating point is that it has limited range and precision. In float32, you can represent numbers from about $10^{-38}$ to $3 \times 10^{38}$, but with only about 7 decimal digits of precision. Go outside this range and you get infinity. Lose precision through repeated operations and your gradients become noise.
The classic example is softmax. The naive formula $\mathrm{softmax}(x)_i = e^{x_i} / \sum_j e^{x_j}$ looks innocent, but feed it a logit of 1000 and you're computing $e^{1000}$, which overflows to infinity. Your network just died.
Log Sum Exp: Stable Computation
We saw this solution earlier for classification, but it's worth repeating because this pattern appears everywhere. The trick: shift by the maximum before exponentiating.
But here's the deeper pattern: when you're computing ratios of exponentials, always work in log space as long as possible:
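A sketch of both tricks, with extreme logits chosen to break the naive formula:

```python
import torch

def stable_softmax(x, dim=-1):
    # Shift by the max: exp() now sees values <= 0, which cannot overflow.
    shifted = x - x.max(dim=dim, keepdim=True).values
    return shifted.exp() / shifted.exp().sum(dim=dim, keepdim=True)

def stable_log_softmax(x, dim=-1):
    # Stay in log space: log softmax = x - logsumexp(x)
    return x - torch.logsumexp(x, dim=dim, keepdim=True)

logits = torch.tensor([[1000.0, 0.0, -1000.0]])
print(stable_softmax(logits))       # tensor([[1., 0., 0.]]) -- no inf, no nan
print(stable_log_softmax(logits))   # finite log-probabilities
```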
Cross-Entropy: Never Compute Log of Probability
Another stability pitfall: computing cross entropy as -log(p[target]) where p comes from softmax. If p is tiny (which happens for wrong classes), log(p) can underflow or lose precision. Instead, combine operations:
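A sketch of the difference, with exaggerated logits so the naive version visibly fails:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[100.0, -100.0]])
target = torch.tensor([1])               # the "wrong", very unlikely class

# Unstable: softmax -> probability underflows to 0 -> log(0) = -inf.
p = F.softmax(logits, dim=-1)
naive = -torch.log(p[0, target])

# Stable: cross_entropy works in log space and never materializes the tiny probability.
fused = F.cross_entropy(logits, target)

print(naive.item(), fused.item())        # inf vs 200.0
```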
The Sigmoid-Binary Cross-Entropy Fusion
For binary classification, the naive approach computes sigmoid then binary cross-entropy. This fails for extreme inputs:
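A sketch with an extreme logit; binary_cross_entropy_with_logits is the fused, stable version:

```python
import torch
import torch.nn.functional as F

z = torch.tensor([100.0])   # extreme logit
y = torch.tensor([0.0])     # label

# Naive: sigmoid saturates to exactly 1.0, then log(1 - 1.0) = log(0) = -inf.
p = torch.sigmoid(z)
naive = -(y * torch.log(p) + (1 - y) * torch.log(1 - p))

# Fused: works directly on the logit, mathematically equivalent but stable.
fused = F.binary_cross_entropy_with_logits(z, y)

print(naive.item(), fused.item())   # inf vs 100.0
```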
Variance and Normalization: The Catastrophic Cancellation Problem
Computing variance as $\mathrm{E}[x^2] - (\mathrm{E}[x])^2$ is numerically unstable when the mean is large relative to the standard deviation. You're subtracting two large, similar numbers.
This is why BatchNorm and LayerNorm implementations always center first:
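A sketch that manufactures the failure with a tiny spread around a huge mean:

```python
import torch

x = torch.randn(10000) * 0.01 + 10000.0        # tiny spread around a huge mean

# Catastrophic cancellation: E[x^2] and (E[x])^2 are both ~1e8, and their true
# difference (~1e-4) is far below float32's precision at that magnitude.
naive_var = (x ** 2).mean() - x.mean() ** 2

# Center first, then square: no large, nearly-equal numbers are ever subtracted.
centered_var = ((x - x.mean()) ** 2).mean()

print(naive_var.item(), centered_var.item())   # naive is wildly off (often negative)
```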
Mixed Precision: Common Pitfalls (When Float16 Kills You)
Modern training often uses float16 for speed and memory savings. But float16 has a tiny range: maximum value around 65,504, minimum normalized value around $6 \times 10^{-5}$. Gradients easily overflow or underflow.
The solution is loss scaling: multiply your loss by a large constant before backprop, then divide gradients by the same constant, exactly the pattern from the mixed-precision section above.
Gradient Checking Playbook
You've implemented a complex layer. How do you know the gradients are correct? The answer: systematic numerical gradient checking. But there's an art to doing it right.
The Centered Difference Formula
The basic finite difference formula $\frac{f(x+\epsilon) - f(x)}{\epsilon}$ has $O(\epsilon)$ error. The centered version has $O(\epsilon^2)$ error, much better:

$$\frac{\partial f}{\partial x_i} \approx \frac{f(x + \epsilon e_i) - f(x - \epsilon e_i)}{2\epsilon}$$
The Gradient Check Protocol
Here's a systematic approach that catches countless bugs:
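A sketch of such a protocol; the tolerances and the toy tanh layer are arbitrary choices:

```python
import torch

def gradient_check(fn, x, eps=1e-5, atol=1e-4, rtol=1e-3):
    """Compare analytic gradients of a scalar-valued fn against centered differences.

    Runs in float64: in float32 the finite-difference error drowns out real bugs.
    """
    x = x.detach().double().requires_grad_(True)
    fn(x).backward()
    analytic = x.grad.clone()

    numeric = torch.zeros_like(x)
    flat = x.detach().clone().view(-1)
    for i in range(flat.numel()):
        orig = flat[i].item()
        flat[i] = orig + eps
        f_plus = fn(flat.view_as(x)).item()
        flat[i] = orig - eps
        f_minus = fn(flat.view_as(x)).item()
        flat[i] = orig
        numeric.view(-1)[i] = (f_plus - f_minus) / (2 * eps)

    ok = torch.allclose(analytic, numeric, atol=atol, rtol=rtol)
    max_err = (analytic - numeric).abs().max().item()
    return ok, max_err

# Check a small layer-like function on a random input.
w = torch.randn(3, 4, dtype=torch.float64)
fn = lambda x: torch.tanh(x @ w).sum()
print(gradient_check(fn, torch.randn(2, 3)))   # (True, ~1e-9)
```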
Choosing the Right Epsilon
The choice of $\epsilon$ is critical. Too large and you're not approximating the derivative; too small and floating point errors dominate. A practical default: run the check in float64 with $\epsilon$ around $10^{-5}$ to $10^{-6}$.
Special Cases That Need Care
Some operations need special handling during gradient checking: non-differentiable points (ReLU or abs exactly at zero, where the two-sided difference straddles a kink), stochastic layers like dropout (fix the random mask or switch to eval mode), and anything that compares or sorts values, where a small perturbation can flip a branch.
Unit Tests for Layers
Gradient checking catches math errors, but you also need tests for the software engineering aspects: shapes, dtypes, device placement, edge cases.
The Most Common Layer Bugs
After debugging hundreds of custom layers, the bugs that waste the most time are: gradient shapes that don't match parameter shapes (usually a missing sum over broadcast dimensions), in-place operations that overwrite tensors the backward pass still needs, dtype or device mismatches between forward and backward, and untested edge cases like batch size one or empty inputs.
Reproducibility Knobs
When a bug appears intermittently, it's often due to randomness. Making training reproducible is essential for debugging:
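A sketch of the usual knobs (the seed value is arbitrary):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 0):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)            # seeds CPU and all CUDA devices

seed_everything(42)

# Determinism knobs beyond seeding: force deterministic kernels where possible.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.use_deterministic_algorithms(True, warn_only=True)
```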
But here's the catch: perfect reproducibility has costs. Deterministic kernels are often slower, some operations have no deterministic implementation at all (use_deterministic_algorithms will warn or raise on them), and bitwise reproducibility across different GPUs, drivers, or library versions isn't guaranteed anyway.
The pragmatic approach: use reproducibility for debugging, not production. Turn the knobs on while you hunt a bug, then turn them off for long runs where throughput matters.
Monitoring Gradient Health
The best bugs are the ones you catch before they cause problems. Continuous gradient monitoring reveals issues early.
What to Watch For
After training thousands of models, these are the gradient patterns that predict problems: gradient norms in early layers collapsing toward zero (vanishing), norms growing or spiking by orders of magnitude from one step to the next (exploding, often a prelude to NaN), layers whose gradients are exactly zero (dead units or an accidentally detached graph), and a drifting ratio of update size to parameter size. A simple per-layer logging hook catches all of these; a sketch follows below.
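A minimal sketch of such a hook; the thresholds are arbitrary and should be tuned to your model:

```python
import torch

def log_gradient_health(model, step):
    """Per-layer gradient norms: the cheapest early-warning signal there is."""
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        norm = p.grad.norm().item()
        if not torch.isfinite(p.grad).all():
            print(f"[{step}] {name}: NON-FINITE gradient!")
        elif norm == 0.0:
            print(f"[{step}] {name}: dead (all-zero) gradient")
        elif norm > 1e3:
            print(f"[{step}] {name}: exploding gradient norm {norm:.1e}")

# Example usage after a backward pass in the training loop:
model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU(), torch.nn.Linear(10, 1))
loss = model(torch.randn(4, 10)).sum()
loss.backward()
log_gradient_health(model, step=0)
```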
The dirty truth about deep learning: half of "model doesn't work" issues are numerical problems, not architectural ones. A single unstable operation can cascade into training failure. One layer with wrong gradient implementation can bottleneck your entire network.
But with the patterns and tools covered here (stable implementations, systematic testing, continuous monitoring) you can build bulletproof systems. When your model doesn't train, you'll know exactly where to look. When you implement a new layer, you'll verify it actually works. When numerical issues arise, you'll catch them before they waste weeks of compute.
This isn't the exciting part of deep learning. But it's the difference between research that works reliably and research that mysteriously fails. Get these fundamentals right, and you can focus on the interesting problems instead of debugging numerical disasters.
Interpretability via Gradients: What Your Network Actually Looks At
Throughout this post, we've explored how gradients flow backward to train networks. But here's something not yet covered: those same gradients can tell you what your network is "looking at" when it makes decisions. Not in some abstract mathematical sense, but literally which pixels in an image or words in a sentence drove the prediction.
This is interpretability through gradients, and it's both simpler and more limited than most people realize. The core idea: if changing a pixel would change the output, that pixel matters. The gradient tells you exactly how much. But as we'll see, this local sensitivity isn't the same as importance, and definitely isn't the same as understanding.
Think about it this way: you have a trained network that correctly classifies an image as a dog. You want to know why. The gradient at each input pixel tells you: "if I slightly increased this pixel's intensity, here's how much the dog score would change." Pixels with large gradients have high influence. Visualize these gradients and you get a saliency map, a heat map of influence.
But there's a catch that trips everyone up: the gradient is purely local. It tells you what would happen if you made a tiny change right now, not what would happen if the pixel wasn't there at all, or what role it plays in the broader computation. It's like asking "which pedal affects your speed?" while driving at 60mph. The answer (brake pedal: massive negative effect, gas pedal: small positive effect) tells you about local sensitivity, not about which pedal got you to 60mph in the first place.
Plain Saliency: The Simplest Attribution
The most straightforward approach is to literally visualize the gradient. You already computed it for training; now just look at it:
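A sketch against a toy classifier (any model with a class-score output works the same way):

```python
import torch

def saliency_map(model, image, target_class):
    """Gradient of the target-class score with respect to the input pixels."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad            # same shape as the image: one value per pixel

# Toy stand-in for a trained classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.randn(3, 32, 32)
grads = saliency_map(model, image, target_class=3)
print(grads.shape)               # torch.Size([3, 32, 32])
```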
The results are... messy. Raw gradients are noisy, full of high-frequency patterns that don't correspond to meaningful features. Why? Because modern networks are highly non-linear. Small input changes can cause large output changes, especially near decision boundaries. The gradient captures all of this sensitivity, including noise.
Here's a more fundamental issue: the gradient depends on the current activation state. If a neuron is saturated (like a ReLU that's outputting zero), its gradient is zero, even if that neuron is crucial for the classification. The gradient can't see through dead zones.
The standard fix is to take gradient magnitudes (absolute value, then max or mean over the color channels) and sometimes smooth them spatially before visualizing.
But even with these improvements, plain saliency has a fundamental limitation: it only tells you about infinitesimal changes. For finite changes (like removing a pixel or changing it significantly), the linear approximation breaks down. A better approach is needed.
SmoothGrad: Average Out the Noise
Here's an observation that led to a simple but effective improvement: gradient noise is often random, but true signal is consistent. So why not add noise to your input multiple times and average the resulting gradients? This is SmoothGrad:
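A sketch, reusing the same toy classifier as above; n_samples and noise_std are arbitrary choices:

```python
import torch

def smoothgrad(model, image, target_class, n_samples=32, noise_std=0.15):
    """Average saliency maps over noisy copies of the input (SmoothGrad)."""
    total = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_class]
        score.backward()
        total += noisy.grad
    return total / n_samples

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.randn(3, 32, 32)
print(smoothgrad(model, image, target_class=3).shape)   # torch.Size([3, 32, 32])
```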
The math behind why this works is actually straightforward. If the gradient noise is roughly independent across samples (which it often is for high-frequency noise), averaging over $n$ noisy copies reduces its variance by a factor of $n$:

$$\mathrm{Var}\!\left[\frac{1}{n}\sum_{i=1}^{n} g_i\right] = \frac{1}{n}\,\mathrm{Var}[g]$$
Meanwhile, the consistent signal (the part of the gradient that represents true features) remains after averaging. It's the same principle as taking multiple measurements in physics to reduce measurement error.
But SmoothGrad is still computing local gradients, just more stable ones. It doesn't solve the saturation problem or the locality issue. For that, a different approach is needed.
Integrated Gradients: The Path Integral Solution
Here's the core problem with local gradients: they only tell you about infinitesimal changes. But what if instead of looking at the gradient at just one point, we integrated gradients along an entire path from a baseline to the input? This is Integrated Gradients, and it's the most principled approach covered here.
The idea: start from a baseline input (like a black image) where the network outputs nearly zero signal. Then gradually interpolate to your actual input, computing gradients at each step. The integral of these gradients gives you the total attribution:

$$\mathrm{IG}_i(x) = (x_i - x'_i) \int_0^1 \frac{\partial F\big(x' + \alpha(x - x')\big)}{\partial x_i}\, d\alpha$$

where $x$ is your input, $x'$ is the baseline, and $F$ is your model's output for the class of interest.
In code:
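A sketch with a straight-line path and a Riemann-sum approximation of the integral; the toy classifier and step count are arbitrary:

```python
import torch

def integrated_gradients(model, image, target_class, baseline=None, steps=50):
    """Approximate the path integral of gradients from baseline to input."""
    if baseline is None:
        baseline = torch.zeros_like(image)        # "black image" baseline

    total = torch.zeros_like(image)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = (baseline + alpha * (image - baseline)).requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target_class]
        score.backward()
        total += point.grad

    avg_grad = total / steps
    return (image - baseline) * avg_grad          # attribution per pixel

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.randn(3, 32, 32)
attr = integrated_gradients(model, image, target_class=3)

# Completeness check: attributions should (approximately) sum to F(x) - F(baseline).
with torch.no_grad():
    delta = (model(image.unsqueeze(0)) - model(torch.zeros_like(image).unsqueeze(0)))[0, 3]
print(attr.sum().item(), delta.item())
```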
Why does this work better? Three reasons:
First, it satisfies completeness: the sum of all attributions equals the difference between the model's output at the input and the baseline. This is literally the fundamental theorem of calculus for path integrals:

$$\sum_i \mathrm{IG}_i(x) = F(x) - F(x')$$
This means the attributions fully account for the network's prediction. No signal is lost or created.
Second, it handles saturation. Even if the gradient is zero at your input (saturated ReLU), there might be non-zero gradients along the path. Integrated gradients captures these.
Third, implementation invariance: if two networks compute the same function but with different internal structure, they'll give the same attributions. This isn't true for plain gradients, which depend on the specific activations.
The choice of baseline matters immensely. A black image assumes "absence of signal" is your counterfactual. But you could use:
- Blurred version (what features beyond blur matter?)
- Random noise (what structure matters?)
- Dataset mean (what makes this input special?)
Each baseline asks a different question.
The Limits of Gradient-Based Interpretation
Now for the cold water: gradient-based interpretability is fundamentally limited. Not because of implementation issues, but because of what gradients actually tell you.
Remember: gradients measure local sensitivity, not global importance. Here's a concrete example that breaks most people's intuition:
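A toy sketch of the saturation version of this problem: the feature that actually drives the output gets a near-zero gradient:

```python
import torch

# A toy "network": the output is driven entirely by x1 through a saturated sigmoid.
def f(x):
    return torch.sigmoid(10.0 * x[0]) + 0.01 * x[1]

x = torch.tensor([5.0, 0.0], requires_grad=True)
y = f(x)
y.backward()

print(y.item())    # ~1.0 : x1 is what made the output large
print(x.grad)      # grad wrt x1 ~ 0 (saturated), grad wrt x2 = 0.01
# A saliency map would highlight x2 as "more important" than x1 here.
```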
The problem is that gradients are local. Near a decision boundary, tiny changes cause huge gradient shifts. The saliency map tells you what would change the decision from this exact point, not what features the network learned to recognize.
Here's another limitation: gradient attributions don't compose. If feature A contributes +0.5 to the output and feature B contributes +0.3, their combination might contribute +2.0 due to interactions. Gradients can't capture these non-linear interactions.
Perhaps most importantly, saliency doesn't mean causality. A pixel might have high gradient because it's on a decision boundary, not because it contains meaningful features. The network might be looking at spurious correlations.
So what good are gradient attributions? They're useful for:
- Debugging: Finding obvious failure modes (looking at background not foreground)
- Dataset bias discovery: Revealing spurious correlations
- Model comparison: Different architectures focus on different features
- Hypothesis generation: Not definitive answers
But they're not useful for:
- Proving what the network learned: Local sensitivity isn't global understanding
- Trustworthiness assessment: Reasonable attributions don't mean correct reasoning
- Feature importance ranking: Interactions break additive assumptions
- Causal understanding: Correlation in gradients isn't causation
The right way to think about gradient-based interpretability: it's a flashlight in a dark room. It illuminates local structure around where you're standing (your input), but it doesn't show you the overall architecture of the room (what the network truly learned). It's a useful tool for exploration, not a complete answer for understanding.
Remember that gradients were designed for optimization, not explanation. The fact that they provide any interpretability at all is a bonus. Use them to generate hypotheses about your model's behavior, then test those hypotheses with targeted experiments. Don't mistake the map for the territory: the gradient attribution is not the model's reasoning, just one flawed window into it.
This completes our journey from backpropagation as a training algorithm to gradients as an interpretation tool. The same computational machinery that trains your network can provide insights into its decisions, with all the limitations that local linearization implies. In the end, gradients are just derivatives: they tell you about rates of change, not about the function itself. Use them wisely.
Advanced Topics and Frontiers: Beyond Standard Backprop
We've explored backprop from every angle: as efficient gradient computation, as reverse-mode autodiff, as graph traversal, as the foundation of deep learning. You understand how gradients flow through layers, how they vanish or explode, how to control them with initialization and normalization. You can trace adjoints through any computational graph and implement VJPs for custom operations.
But backprop's story doesn't end with standard neural networks. The same principles scale to programs far more complex than feedforward networks: differential equation solvers, optimization problems, probabilistic programs, even quantum circuits. Once you understand that backprop is just the chain rule applied systematically to any differentiable computation, whole new domains open up.
This final section surveys the frontiers. We won't dive deep into implementation (each topic deserves its own post), but here's how the principles you've learned extend to cutting-edge research. Think of this as a map of where to explore next, with just enough detail to see how everything connects back to what you already know.
Higher-Order Gradients: Differentiating the Differentiator
So far we've focused on first-order gradients: $\nabla_\theta L$. But what if you need second derivatives? Or gradients of gradients? This isn't academic curiosity. Second-order information enables Newton's method, helps analyze loss landscape geometry, and powers meta-learning algorithms that learn to learn. We explored this a little in the Gradient Descent post, but it's worth reiterating here.
The insight: computing second derivatives is just applying autodiff to the gradient computation itself. Remember, the backward pass is itself a program. You can differentiate it.
The Hessian-Vector Product Pattern
Computing the full Hessian matrix for $n$ parameters is usually infeasible (imagine storing a trillion-element matrix for a million parameters). But you rarely need the full matrix. Usually you want Hessian-vector products: $Hv$ for some vector $v$.
Here's the trick Barak Pearlmutter popularized: differentiate the dot product of the gradient with $v$:

$$Hv = \nabla_\theta\big(\nabla_\theta L \cdot v\big)$$
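A sketch on a quadratic toy loss, where the Hessian is known exactly so the result can be eyeballed:

```python
import torch

def hvp(loss, params, v):
    """Hessian-vector product via Pearlmutter's trick: differentiate (grad . v)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)   # keep graph for 2nd pass
    dot = sum((g * vi).sum() for g, vi in zip(grads, v))
    return torch.autograd.grad(dot, params)                        # = H v

w = torch.randn(3, requires_grad=True)
A = torch.tensor([[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 4.0]])
loss = 0.5 * w @ A @ w                   # Hessian of this loss is exactly A
v = [torch.tensor([1.0, 1.0, 1.0])]
print(hvp(loss, [w], v))                 # (tensor([2., 3., 4.]),) = A @ v
```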
The cost? Roughly 2-3x a single gradient computation. Compare that to the $n$ gradient computations needed for the full Hessian. For a million parameters, that's the difference between 3 forward-backward passes and a million.
You can compose this pattern. Want third derivatives? Differentiate the HVP computation. Each level of differentiation just adds another layer to your computational graph.
The key pattern: you never build the full higher-order tensor. You only compute contractions with vectors, keeping everything linear in parameter count.
The Forward-Over-Reverse Pattern
Here's a mind-bending trick: you can mix forward-mode and reverse-mode autodiff. For the Hessian, this gives you the best of both worlds:
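A sketch in JAX, whose documented idiom for this is composing jacfwd over grad:

```python
import jax
import jax.numpy as jnp

def loss_fn(w):
    return jnp.sum(jnp.tanh(w) ** 2)

# Reverse mode gives the gradient; forward mode over it gives Hessian columns.
hessian_fn = jax.jacfwd(jax.grad(loss_fn))   # forward-over-reverse

w = jnp.array([0.5, -1.0, 2.0])
H = hessian_fn(w)
print(H.shape)    # (3, 3): one forward-mode pass per column
```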
This computes the full Hessian in $n$ passes, each costing about the same as one gradient. Still expensive for huge $n$, but much better than naive approaches.
Implicit Function Differentiation: Solving Without Unrolling
Here's a problem that breaks standard backprop: what if your forward pass involves solving an equation? For example, finding a fixed point

$$z^* = f(z^*, x),$$

typically found by iterating $z_{t+1} = f(z_t, x)$ until it stops changing.
You could unroll the iteration and backprop through all 100 steps. But that's memory-intensive and unstable (gradients through long iterative procedures often vanish or explode).
The implicit function theorem offers a shortcut. If $z^*$ satisfies $z^* = f(z^*, x)$, then:

$$\frac{\partial z^*}{\partial x} = \left(I - \frac{\partial f}{\partial z}\right)^{-1} \frac{\partial f}{\partial x}\Bigg|_{z = z^*}$$
You differentiate the equilibrium condition, not the path to equilibrium:
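A sketch on a scalar toy fixed point (the contraction f here is an arbitrary choice); the forward solve runs without gradient tracking, and the backward uses the formula above:

```python
import torch

def f(z, x):
    return 0.5 * torch.tanh(z) + x     # a contraction, so plain iteration converges

x = torch.tensor(0.3, requires_grad=True)

# Forward: find the fixed point by iteration, with no tape being built.
with torch.no_grad():
    z = torch.zeros(())
    for _ in range(50):
        z = f(z, x)

# Backward: differentiate the equilibrium condition z* = f(z*, x).
# Scalar implicit function theorem: dz*/dx = (1 - df/dz)^(-1) * df/dx at z*.
z_star = z.detach().requires_grad_(True)
out = f(z_star, x)
df_dz = torch.autograd.grad(out, z_star, retain_graph=True)[0]
df_dx = torch.autograd.grad(out, x)[0]
dz_dx = df_dx / (1 - df_dz)

print(f"implicit gradient dz*/dx = {dz_dx.item():.6f}")
```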
This pattern extends to any solver: optimization problems, differential equations, even eigenvalue computations. Instead of differentiating through the algorithm that finds the solution, differentiate the condition that defines it.
DEQs: Deep Equilibrium Models
This idea powers Deep Equilibrium Models (DEQs), which replace stacked layers with a single layer iterated to equilibrium.
One layer, infinite depth. Memory cost of a single layer, expressiveness of a deep network. The catch? Solving for fixed points is slower than forward passes through explicit layers. But for memory-constrained settings, it's a powerful trade-off.
Neural ODEs: When Depth Becomes Continuous
Common Derivative Reference Tables
After all this theory, here's something practical: a reference table of derivatives you'll use constantly. These are organized by pattern rather than alphabetically, so you can see the relationships.
Activation Functions and Their Derivatives
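A compact subset of the usual suspects:

$$
\begin{aligned}
\sigma'(x) &= \sigma(x)\,\big(1-\sigma(x)\big) \\
\tanh'(x) &= 1 - \tanh^2(x) \\
\mathrm{ReLU}'(x) &= \mathbf{1}[x > 0] \\
\mathrm{softplus}'(x) &= \sigma(x) \\
\mathrm{LeakyReLU}'(x) &= \begin{cases} 1 & x > 0 \\ \alpha & x \le 0 \end{cases}
\end{aligned}
$$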
Loss Functions and Their Gradients
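The fused-loss gradients worth memorizing, with one-hot or 0/1 targets $y$:

$$
\begin{aligned}
\text{softmax + cross-entropy (logits } z\text{):} \quad & \frac{\partial L}{\partial z} = \mathrm{softmax}(z) - y \\
\text{sigmoid + BCE (logit } z\text{):} \quad & \frac{\partial L}{\partial z} = \sigma(z) - y \\
\text{MSE, } L = \tfrac{1}{2}\lVert \hat{y} - y\rVert^2\text{:} \quad & \frac{\partial L}{\partial \hat{y}} = \hat{y} - y
\end{aligned}
$$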
Matrix Operations
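For an affine layer $Y = XW + b$ with upstream gradient $\partial L / \partial Y$:

$$
\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Y}\, W^\top,\qquad
\frac{\partial L}{\partial W} = X^\top\, \frac{\partial L}{\partial Y},\qquad
\frac{\partial L}{\partial b} = \sum_{\text{batch}} \frac{\partial L}{\partial Y}
$$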
The Shape Discipline Quick Reference
Throughout this post, we've emphasized that gradients must match parameter shapes. The pattern: a weight of shape (out, in) gets a gradient of shape (out, in); a bias of shape (out,) gets a gradient of shape (out,) after summing over the batch; the gradient flowing back to an input has the input's own shape, batch dimension included.
Remember: the gradient's job is to tell you how to update each parameter. If shapes don't match, the update makes no sense. This isn't a quirky implementation detail; it's mathematical necessity.
A Closing Thought: The Gradient Perspective
Phew! You've made it to the end of the trilogy on backprop! After thousands of words on backpropagation, here's a short reflection to keep in view:
Gradients are not magic. They're not learning. They're not intelligence. They're just derivatives: local linear approximations to how changes propagate through a computation. Following these local approximations downhill often finds useful solutions: this practical fact underpins modern deep learning.
This approach has limits. Gradients only see infinitesimal neighborhoods. They can't see around corners, can't escape flat regions without momentum, and can't distinguish correlation from causation. Many failure modes of deep learning (adversarial examples, spurious correlations, catastrophic forgetting) stem from this locality.
Yet despite these limitations, or perhaps because we understand them, gradients have become the foundation of artificial intelligence. Not because they're perfect, but because they're computable. In a world where most optimization problems are intractable, gradients give us a tractable path forward, one local step at a time.
The story of backpropagation is really a story about finding the right abstraction. By recognizing that complex functions are compositions of simple ones, that derivatives obey the chain rule, and that we can systematically accumulate them through any computation graph, we turned an impossible problem (optimize millions of parameters) into a mechanical procedure (forward pass, backward pass, update, repeat).
This is the deeper lesson. The breakthroughs in AI haven't come from making gradients smarter, but from building better functions for them to flow through: convolutions that respect spatial structure, attention that models relationships, normalization that stabilizes flow. The gradient computation stays the same; we just give it better terrain to navigate.
You now understand that rule. You can trace gradients through any computation, implement custom operations, debug training failures, and even extend autodiff to new domains. More importantly, you understand the limitations: what gradients can and cannot tell you about your model.
Take this knowledge and build something. The best way to truly understand backpropagation is to implement it, break it, fix it, and extend it. Start with a toy example, then scale up. You'll discover nuances I didn't cover, encounter numerical issues I didn't mention, and develop intuitions I can't convey in words.
The gradient is your compass in the vast space of possible functions, and backprop is the framework that scales it and makes it meaningful. You may not be 100% sure you're marching toward the best place, but at least you know which direction goes downhill, and you use that knowledge to the best of your ability. Sometimes, that's enough to find something remarkable.
References and Further Reading
Papers
- Rumelhart, D. E., Hinton, G. E., & Williams, R. J., Learning Internal Representations by Error Propagation
- Daniel Smilkov et al., SmoothGrad: removing noise by adding noise
- Jie Ren et al., ZeRO-Offload: Democratizing Billion-Scale Model Training
- Mukund Sundararajan et al., Axiomatic Attribution for Deep Networks (Integrated Gradients)
Deep Learning Frameworks
- PyTorch Team, PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
- Google JAX Team, JAX: Composable transformations of Python+NumPy programs
- TensorFlow Team, TensorFlow: An Open Source Machine Learning Framework for Everyone
Distributed Training & Memory Optimization
- PyTorch Team, FullyShardedDataParallel — PyTorch documentation
- DeepSpeed Team, DeepSpeed: Deep learning optimization library with ZeRO memory optimization stages
Model Interpretability Tools
- PyTorch Team, Captum: Model interpretability and understanding for PyTorch
- SHAP Team, SHAP: SHapley Additive exPlanations
- Google PAIR, Saliency: Framework-agnostic implementation for saliency methods
- Jacob Gildenblat, PyTorch Grad-CAM: Advanced AI explainability for computer vision