Historical Milestones

Note: About This Part

Proof-of-Mastery Demonstrations | 6 Milestones | Prerequisites: Vary by milestone

Milestones are runnable recreations of historical ML breakthroughs that use YOUR TinyTorch implementations. Each one validates that the components you built across the modules can reproduce results that once made headlines.

Overview

You’ve spent the modules building a working ML framework — tensors, autograd, layers, optimizers, attention. The milestones answer the only question that matters: does it actually run the experiments that defined the field?

You’ll find out by rebuilding history. Each milestone reproduces a landmark result — Rosenblatt’s Perceptron, the XOR crisis, backpropagation, convolutional networks, transformers, MLPerf — using your code. When the Perceptron learns, it’s running your gradient descent. When attention processes a sequence, it’s running your multi-head attention on top of your transformer block. When CIFAR-10 accuracy climbs past 70%, those are your convolutional layers extracting the features.

That makes these chapters proof — to yourself, and to anyone reading your repo — that the framework you built is the same kind of working system the original researchers ran their experiments on.

The Journey

Table 1 traces the historical milestone timeline and the modules each one requires.

Table 1: Historical milestone timeline and required modules for each.
| Year | Milestone | What You’ll Build | Required Modules |
|------|-----------|-------------------|------------------|
| 1958 | Perceptron | First neural network (forward pass + training) | 01–04, 06–08 |
| 1969 | XOR Crisis | Experience the AI Winter trigger | 01–08 |
| 1986 | MLP Revival | Backprop solves XOR + digit recognition | 01–08 |
| 1998 | CNN Revolution | Convolutions (70%+ on CIFAR-10) | 01–09 |
| 2017 | Transformers | Multi-head attention on a structured sequence task | 01–08, 10–13 |
| 2018 | MLPerf | Production optimization pipeline | 01–08, 14–18 |

Why Milestones Transform Learning

You’ll feel the historical struggle. When your single-layer perceptron hits 50% accuracy on XOR and refuses to budge — loss stuck at 0.69, epoch after epoch — you’ll understand in your bones why Minsky and Papert’s proof stalled neural-network research for a decade. The AI Winter wasn’t abstract skepticism; it was researchers watching their perceptrons fail in exactly the way yours just did.
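If you want to see that plateau before running the milestone, here is a minimal sketch of the failure in plain NumPy — not TinyTorch code; the names, seed, and hyperparameters are illustrative. A single sigmoid unit trained on XOR with binary cross-entropy settles at ln 2 ≈ 0.693, because the best a linear unit can do is predict 0.5 for every input:

```python
import numpy as np

# XOR truth table: no single line separates class 0 from class 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # one linear unit: weights and bias
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5001):
    p = sigmoid(X @ w + b)          # forward pass
    if epoch % 1000 == 0:
        loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
        print(f"epoch {epoch:4d}  loss {loss:.4f}")
    g = p - y                       # dL/dz for sigmoid + binary cross-entropy
    w -= 0.5 * (X.T @ g) / len(y)   # plain gradient descent, lr = 0.5
    b -= 0.5 * g.mean()
# Loss flatlines near 0.6931 = ln 2: the unit predicts ~0.5 for every input.
```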

You’ll experience the breakthrough. Then you add one hidden layer. Same data, same training loop. Suddenly: 100% accuracy. Loss collapses to zero. You didn’t just read about how depth unlocks non-linear representations — you watched your two-layer network solve what your one-layer network couldn’t. That’s lived experience, not summary.
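Continuing the same sketch (again plain NumPy with illustrative sizes, not the TinyTorch API), one hidden layer plus backpropagation is the entire difference:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # NEW: one hidden layer, 4 units
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(10000):
    h = np.tanh(X @ W1 + b1)              # hidden representation
    p = sigmoid(h @ W2 + b2)              # output probability
    g2 = (p - y) / len(y)                 # backprop: gradient at the output
    g1 = (g2 @ W2.T) * (1 - h ** 2)       # chain rule through tanh
    W2 -= lr * (h.T @ g2); b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (X.T @ g1); b1 -= lr * g1.sum(axis=0)

print((p > 0.5).astype(int).ravel())      # expected: [0 1 1 0], XOR solved
```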

You’ll build something real. By Milestone 04 you’re done with toy demos. You’re streaming 50,000 natural images through your DataLoader, extracting features with your convolutional layers, and pushing past 70% top-1 accuracy on CIFAR-10 — using a network you wrote line by line, on a framework you wrote module by module.

How to Use Milestones

# See which modules you’ve completed
tito module status

# Run a milestone through the tito CLI
tito milestone run 01

# Or run a milestone script directly
cd milestones/01_1958_perceptron
python 01_rosenblatt_forward.py

Each tinytorch/milestones/NN_yyyy_name/ folder contains:

  • README.md — full historical context and instructions
  • Python scripts — progressive demonstrations (e.g., “see the problem” then “see the solution”)

Learning Philosophy

Module teaches:   HOW  to build the component
Milestone proves: WHAT you can build with it

Modules give you the parts. Milestones force the parts to do real work — the same work that, in each case, moved the field forward.

What’s Next?

Start at the beginning. Open milestones/01_1958_perceptron/, run 01_rosenblatt_forward.py, and watch a single-layer network — built on your tensor, your loss, and your SGD loop — converge on a linearly separable dataset in a handful of epochs. From there the path is chronological: each milestone fails the way the field failed, then succeeds with the idea that broke the impasse.
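For a preview of what that first script demonstrates, here is a minimal sketch of Rosenblatt’s 1958 update rule on a linearly separable toy problem — plain NumPy with made-up data, not the milestone script itself. The rule touches the weights only on mistakes, and on separable data it provably stops making them:

```python
import numpy as np

# Made-up linearly separable data: class 1 iff x0 + x1 > 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(50, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)   # step-function forward pass
        if pred != yi:               # Rosenblatt's rule: update only on mistakes
            w += (yi - pred) * xi    # (yi - pred) is +1 or -1
            b += yi - pred
            errors += 1
    if errors == 0:                  # a full pass with no mistakes: converged
        print(f"perfect separation after {epoch + 1} epochs")
        break
```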

Build the future by understanding the past.
