Track C — Complete from Scratch

For someone with limited or no ML background, whether you start from solid software engineering or from beginner Python. The whole 15-stage path, sequenced for steady progress without burnout.

Time: 24–36 weeks at ~10 hours/week. Faster if you do more; slower is fine. Endpoint: you can build, ship, and operate AI products, and have a defensible foundation to specialize from.

This track has no shortcuts. That’s a feature.


Prerequisites

  • Comfortable with Python at an “I can write a CLI tool” level. If not, do Automate the Boring Stuff with Python first (~2 weeks).
  • Comfortable with command line, git, virtualenv.
  • A laptop. A GPU is helpful for some weeks but not required.
  • One Python environment per project (venv, uv, or pixi).

Phase 1 — Foundations (weeks 1–8)

The math + classical ML. Investment that pays for years.

Week 1 — Linear algebra

  • Stage 1 — README, Linear algebra.
  • Watch 3Blue1Brown’s Essence of Linear Algebra (free, YouTube).
  • Code each operation in NumPy.
  • Exercise: implement matmul from scratch in pure Python; verify against NumPy.
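The matmul exercise above fits in a dozen lines; a minimal sketch (triple loop over lists of lists, checked against NumPy):

```python
import numpy as np

def matmul(A, B):
    """Multiply two matrices given as lists of lists, in pure Python."""
    n, k = len(A), len(A[0])
    k2, m = len(B), len(B[0])
    assert k == k2, "inner dimensions must match"
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for t in range(k):
                C[i][j] += A[i][t] * B[t][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
# verify against NumPy
assert np.allclose(matmul(A, B), np.array(A) @ np.array(B))
```

Then time it against `np.matmul` on a 500×500 matrix to feel why vectorized code matters.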

Week 2 — Probability & info theory

Week 3 — Calculus & optimization

  • Calculus & optimization.
  • Watch 3Blue1Brown’s Essence of Calculus.
  • Exercise: implement gradient descent on (x − 3)² from scratch in NumPy. Plot the loss curve.
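A sketch of that gradient descent exercise, with the gradient of (x − 3)² written out by hand (plotting is left to you, e.g. `matplotlib.pyplot.plot(losses)`):

```python
import numpy as np

def gradient_descent(x0=0.0, lr=0.1, steps=50):
    """Minimize f(x) = (x - 3)^2 with plain gradient descent.

    Returns the final x and the loss recorded at each step."""
    x = x0
    losses = []
    for _ in range(steps):
        loss = (x - 3.0) ** 2
        grad = 2.0 * (x - 3.0)  # d/dx (x - 3)^2
        x -= lr * grad
        losses.append(loss)
    return x, np.array(losses)

x_final, losses = gradient_descent()
# x_final is very close to 3; the loss curve decays geometrically toward 0
```

Try a learning rate above 1.0 and watch the loss diverge instead; that intuition carries directly into training neural nets.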

Week 4 — Supervised learning

Week 5 — Loss, optimization, eval

Week 6 — Unsupervised + classical depth

  • Unsupervised learning.
  • Hands-on: k-means, PCA, UMAP on a real dataset.
  • Exercise: UMAP-project 1k MNIST images to 2D. Plot, color by class. Notice the clusters.
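Before reaching for library implementations, note that k-means itself is a few lines of NumPy. A minimal sketch of Lloyd's algorithm (fixed iteration count, no empty-cluster handling; fine for intuition, not production):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm). X: (n, d) array of points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two well-separated toy blobs; the two clusters should be recovered exactly
X = np.concatenate([np.zeros((10, 2)), 10.0 * np.ones((10, 2))])
labels, centers = kmeans(X, 2)
```

Compare your labels against `sklearn.cluster.KMeans` on the same data before moving to PCA and UMAP.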

Week 7 — Neural networks intro

Week 8 — NN depth

End of Phase 1 milestone: you understand neural networks well enough to read a paper’s method section.


Phase 2 — Language Modeling & Transformers (weeks 9–14)

The core of modern AI.

Week 9 — Language modeling

  • Stage 4 — All articles.
  • Exercise: train a character-level LSTM on TinyShakespeare. Generate text. Notice the long-range failure mode.
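As a warm-up before the LSTM (my suggestion, not part of the stage), a pure-Python character bigram model makes the long-range failure mode concrete: each character depends only on its predecessor, so output looks locally plausible but drifts at sentence scale. The LSTM improves on this but still degrades over long contexts.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count character bigrams: counts[c] maps next-char -> frequency."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample(counts, start, n=100, seed=0):
    """Sample n characters, each conditioned only on its predecessor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = counts[out[-1]]
        if not nxt:  # no bigram seen starting from this character
            break
        chars, weights = zip(*nxt.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

counts = train_bigram("to be or not to be that is the question ")
s = sample(counts, "t", n=40)
print(s)
```

Train it on TinyShakespeare instead of the toy string and compare its output with your LSTM's: both get the spelling roughly right, but only the LSTM holds any structure across a line.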

Week 10 — Tokens & embeddings

  • Stage 5 — All articles.
  • Exercise: embed 1k product reviews with sentence-transformers; for a query, find its 5 nearest neighbors.
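The retrieval half of that exercise is just cosine similarity over an embedding matrix. A sketch with random stand-in vectors (in the real exercise, `emb` would come from `model.encode(reviews)` with sentence-transformers):

```python
import numpy as np

def top_k(query_vec, emb, k=5):
    """Indices of the k rows of emb most cosine-similar to query_vec.

    emb: (n, d) matrix of document embeddings; query_vec: (d,) vector."""
    emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    q_n = query_vec / np.linalg.norm(query_vec)
    sims = emb_n @ q_n          # cosine similarity to every document
    return np.argsort(-sims)[:k]

# stand-in embeddings for illustration only
rng = np.random.default_rng(0)
emb = rng.standard_normal((1000, 384))
idx = top_k(emb[42], emb, k=5)
# sanity check: a vector's nearest neighbor is itself
assert idx[0] == 42
```

This brute-force scan is exactly what vector databases optimize; at 1k documents you don't need them yet.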

Week 11 — Attention

Week 12 — Transformer mechanics

Week 13 — GPT from scratch

  • GPT from scratch.
  • Run Karpathy’s nanoGPT alongside.
  • Exercise: train a 6-layer GPT on TinyShakespeare. Generate plausible Shakespeare.
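The heart of that 6-layer GPT is causal self-attention. A single-head NumPy sketch with random weights (real implementations add multiple heads, an output projection, and learned parameters; this only shows the mechanics):

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal self-attention. x: (T, d) token activations."""
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (T, T) attention logits
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                        # no attending to the future
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v                                  # weighted sum of values

rng = np.random.default_rng(0)
T, d = 8, 16
x = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = causal_self_attention(x, Wq, Wk, Wv)
```

The causal mask is why the first token's output is exactly its own value vector: it can attend to nothing else. Check that against nanoGPT's `CausalSelfAttention` as you read it.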

Week 14 — Modern LLMs

  • Stage 7 — All articles.
  • Exercise: plot training-FLOPs vs val-loss for your nanoGPT at 3 sizes. Eyeball your scaling law.
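For the x-axis, a commonly used back-of-envelope estimate is ~6 FLOPs per parameter per training token (forward plus backward pass). A sketch, where the model sizes are hypothetical placeholders; use whatever your three nanoGPT configs actually report:

```python
def train_flops(n_params, n_tokens):
    """Approximate training compute: ~6 * N * D FLOPs (standard rule of thumb)."""
    return 6 * n_params * n_tokens

# hypothetical sizes and token budget, for illustration only
runs = [(1_000_000, 50_000_000), (5_000_000, 50_000_000), (20_000_000, 50_000_000)]
for n, tok in runs:
    print(f"{n:>10,} params -> {train_flops(n, tok):.2e} FLOPs")
```

Plot these against each run's final validation loss on log-log axes (e.g. matplotlib's `loglog`); with only three points you won't fit a law, but the trend should be visibly straight.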

End of Phase 2 milestone: you can draw the transformer block on a whiteboard from memory.


Phase 3 — LLM Application Engineering (weeks 15–22)

Now the practical layer.

Week 15 — Prompting

  • Stage 8 — All articles.
  • Sign up for an API (Anthropic / OpenAI / Gemini).
  • Exercise: make 100 API calls. Build a CLI tool with streaming. Get valid JSON output on 100/100 calls.
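The 100/100-JSON part usually comes down to a validate-and-retry loop. A sketch with a stubbed `call_model` (the stub and its chatty output are my invention; swap in your real API client):

```python
import json

def call_model(prompt):
    """Stub standing in for a real API call; models often wrap JSON in prose."""
    return 'Sure! Here is the JSON: {"sentiment": "positive", "score": 0.9}'

def get_json(prompt, retries=3):
    """Call the model, extract and parse JSON, retry with a sterner prompt."""
    for attempt in range(retries):
        text = call_model(prompt if attempt == 0
                          else prompt + "\nRespond with ONLY valid JSON, no prose.")
        # salvage a JSON object even if the model wrapped it in prose
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            try:
                return json.loads(text[start:end + 1])
            except json.JSONDecodeError:
                continue
    raise ValueError(f"no valid JSON after {retries} attempts")

result = get_json("Classify this review: 'great product'")
print(result)
```

Most providers now also offer structured-output modes; build the retry loop anyway, because it teaches you what those modes are doing for you.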

Week 16 — RAG basics

Week 17 — RAG depth

Week 18 — Fine-tuning intro

Week 19 — LoRA + DPO

Week 20 — Agents

Week 21 — Agent depth

Week 22 — Multimodal

  • Stage 12 — All articles.
  • Exercise: build a “search my photos by description” feature, using CLIP or a multimodal sentence-transformers model.

End of Phase 3 milestone: you can build any common AI feature.


Phase 4 — Production & Specialization (weeks 23–32)

Shipping discipline + your direction.

Week 23 — Deployment

Week 24 — Evals

Week 25 — Guardrails

Week 26 — Observability

Week 27 — Enterprise + applications

Weeks 28–30 — Specialization sprint

Pick a stage to go deep on, based on what you’d want to do for work:

  • RAG specialist: fine-tune embedders, multi-hop, GraphRAG.
  • Agent specialist: multi-agent systems, browser agents, code agents.
  • Fine-tuning specialist: domain-tuned LoRAs, DPO / GRPO.
  • Production specialist: serving, observability, eval frameworks.
  • Multimodal specialist: VLMs, video, audio.
  • Vertical specialist: medical, legal, financial, code.

Read the relevant stages deeply; build something representative.

Weeks 31–32 — Career + ship

  • Stage 15 — All articles.
  • Pick one of:
    • A polished portfolio project on GitHub + writeup.
    • A series of 3 technical blog posts on what you learned.
    • An open-source contribution to a project you care about.

End of Phase 4 milestone: you have public work to point to and a clear next move.


Phase 5 — Going beyond (weeks 33–36+)

Now you’re past the textbook.

  • Read papers weekly (3–5 abstracts; deeply on the 1–2 that matter to you).
  • Reproduce a paper a month.
  • Build a real product — actual users, real feedback.
  • Apply to roles that match your specialization.
  • Contribute to open-source in your area.
  • Mentor someone behind you — teaching is the best test of understanding.

The path doesn’t end. It widens.


Tips for sustainability

  • One hour at a time. A daily hour beats a weekly binge.
  • Build alongside reading. You forget 80% of what you read; you remember 80% of what you build.
  • Public progress. GitHub, blog, even a notes folder. Future-you will thank you.
  • Don’t skip ahead. The path is sequenced for a reason.
  • Don’t get stuck. If a stage feels impossible, take a break, build something, come back.
  • Talk to people. Communities, Discord servers, meetups. Other learners and mentors.

When to take a different track

If at week 8 you find yourself craving math / NN depth more than building products, switch to Track B (you have the foundations). If at week 12 you realize you’d rather just ship things, switch to Track A (do RAG and agents first, come back to transformer internals later).

Tracks are guides, not jails.

See also