Open Source · Companion to mlsysbook.ai
MLSYSIM
Predict ML system performance, cost, and carbon.
From first principles.
Reason about ML workloads — from microcontrollers to GPU clusters — without provisioning any hardware.
pip install mlsysim
import mlsysim
from mlsysim import Engine
profile = Engine.solve(
    model=mlsysim.Models.ResNet50,
    hardware=mlsysim.Hardware.Cloud.A100,
    batch_size=1,
    precision="fp16",
)
print(f"Bottleneck: {profile.bottleneck}") # → Memory Bound
print(f"Latency: {profile.latency.to('ms'):~.2f}") # → 0.34 ms
print(f"Throughput: {profile.throughput:.0f} img/s")  # → 2941 img/s

At batch=1, ResNet-50 loads ~50 MB of weights but performs only ~8 GFLOPs, making it firmly memory-bound on any modern GPU. The solver identifies this in microseconds using the Iron Law [1]:
\[T = \max\!\left(\frac{\text{FLOPs}}{\text{Peak} \times \eta},\ \frac{\text{Bytes}}{\text{BW}}\right)\]
Every solver takes typed registry objects and returns analytically grounded estimates. No benchmarking required.
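The Iron Law above can be sketched in a few lines of plain Python. Every figure below is an illustrative assumption, not a value from the mlsysim registry: ~312 TFLOP/s fp16 peak and ~2 TB/s HBM bandwidth for an A100-class GPU, a 0.5 efficiency factor, and a byte count that lumps weight and activation traffic together:

```python
def iron_law(flops, bytes_moved, peak_flops, efficiency, bandwidth):
    """Latency lower bound: the slower of doing the math and moving the data."""
    t_compute = flops / (peak_flops * efficiency)  # seconds spent on FLOPs
    t_memory = bytes_moved / bandwidth             # seconds spent on bytes
    bottleneck = "compute" if t_compute >= t_memory else "memory"
    return max(t_compute, t_memory), bottleneck

# Hypothetical A100-class numbers (assumptions, not mlsysim registry values).
latency, bound = iron_law(flops=8e9, bytes_moved=0.5e9,
                          peak_flops=312e12, efficiency=0.5, bandwidth=2e12)
print(f"{latency * 1e3:.2f} ms ({bound}-bound)")  # → 0.25 ms (memory-bound)
```

With these rough inputs the memory term dominates; nudging the byte count or efficiency factor moves the workload across the ridge point, which is exactly the sensitivity the solver automates.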
Roofline Analysis: Compute vs. memory bottleneck identification using the Iron Law. Single-node latency and throughput. Tutorial: Hello Roofline
3D Parallelism: Data, tensor, and pipeline parallel scaling efficiency. Ring all-reduce and pipeline bubble overhead. Tutorial: Scaling to 1000 GPUs
LLM Serving: Time-to-first-token (TTFT), inter-token latency (ITL), and KV-cache memory pressure. Tutorial: Two Phases of Inference
Total Cost of Ownership: CapEx, OpEx, electricity, maintenance, and per-query economics over any time horizon. Tutorial: The $9M Question
Sustainability: Energy, carbon footprint (kg CO2e), and water usage across datacenter regions. Tutorial: Geography Matters
Reliability: Fleet MTBF, failure probability, and Young-Daly optimal checkpoint interval. Tutorial: Sensitivity Analysis
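For intuition on KV-cache memory pressure, here is a back-of-the-envelope sketch (not the mlsysim API; the model shape is a rough Llama-2-7B-style assumption). The cache holds one K and one V tensor per layer, so its size is 2 × layers × KV heads × head dim × sequence length × batch × bytes per element:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Total bytes of cached K and V tensors across all layers for one batch."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Rough Llama-2-7B-style shape (assumption): 32 layers, 32 KV heads, head_dim 128.
gib = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                     seq_len=4096, batch=8, dtype_bytes=2) / 2**30
print(f"KV cache: {gib:.1f} GiB")  # → KV cache: 16.0 GiB
```

Under these assumptions, doubling both batch and context length already overflows a 40 GB accelerator, which is why decode throughput is so often capped by memory rather than FLOPs.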
Beginner: Memory-bound vs. compute-bound in 5 lines of Python. Sweep batch sizes and see the roofline crossover.
Beginner: Why most LLM inference is memory-bound, not compute-bound. Visualize the gap between peak FLOP/s and bandwidth.
Intermediate: Pre-fill is compute-bound, decode is memory-bound. Model both phases and diagnose KV-cache pressure.
Advanced: Ring all-reduce communication, pipeline bubbles, and scaling efficiency on distributed GPU clusters.
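The ring all-reduce cost in the advanced tutorial follows the standard bandwidth-term model, in which each of p workers transmits 2(p-1)/p of the gradient payload over its link. A minimal sketch, ignoring per-hop latency and assuming perfect bandwidth utilization (the setup numbers are hypothetical):

```python
def ring_allreduce_time(grad_bytes, n_workers, link_bw):
    """Bandwidth term of a ring all-reduce: each worker sends and receives
    2*(p-1)/p of the payload over its link, pipelined in 2*(p-1) steps."""
    return 2 * (n_workers - 1) / n_workers * grad_bytes / link_bw

# Hypothetical setup: ~2.8 GB of fp16 gradients over 100 GB/s links, 64 workers.
t = ring_allreduce_time(grad_bytes=2.8e9, n_workers=64, link_bw=100e9)
print(f"all-reduce: {t * 1e3:.1f} ms")  # → all-reduce: 55.1 ms
```

Note that the per-step time is nearly independent of worker count for large p, which is what makes ring all-reduce bandwidth-optimal; the latency term it ignores is what eventually hurts at small message sizes.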
MLSYSIM is the computational backbone for the Machine Learning Systems lecture slides: 35 Beamer decks, 1,099 slides, and 266 original SVG diagrams. Each solver maps directly to one or more slide decks so students can move between the analytical engine and lecture material.
17 Decks · The full single-machine ML stack: data engineering, neural computation, training, compression, hardware acceleration, and serving. 570 slides, 141 SVGs.
18 Decks · Distributed infrastructure: compute clusters, network fabrics, distributed training, fault tolerance, fleet orchestration, inference at scale, and sustainability. 529 slides, 125 SVGs.
Tutorial · Full-day tutorial designed for ISCA / ASPLOS / MLSys. Covers the Iron Law, the 5-layer stack, and live MLSYSIM demos from single-node roofline to fleet-scale carbon analysis.
All slides include speaker notes, timing guidance, and 8–11 active learning exercises per deck. See the Teaching Guide for semester plans and customization instructions.
Build intuition for why ML systems behave as they do. Run roofline analysis, see the memory wall, compute carbon footprints — all without needing GPU hardware. See learning path →
Assign analytically grounded problem sets with deterministic, reproducible outputs. Pair MLSYSIM exercises with 35 ready-to-teach Beamer slide decks — each with speaker notes and active learning prompts. See course integration →
Pre-deployment estimates for any architecture. Model distributed overheads, LLM serving latency, and multi-region sustainability before provisioning hardware. See quick API guide →
If you use MLSYSIM, the companion slides, or the textbook in coursework or research, please cite:
The slide decks, MLSYSIM engine, and interactive labs are all part of the same open-source ecosystem. View all resources on GitHub.