Your GPU can process 5,300 images per second. Your CPU decodes 850.
Discover that the data pipeline — not the GPU — is often the binding constraint in training. Use DataModel and TransformationModel to find the crossover where CPU preprocessing stalls the accelerator.
The Question
You launch ResNet-50 training on an A100 and watch nvidia-smi. GPU utilization reads 40%. You expected 95%. The model is compute-bound. The hardware is top-tier. Why is your GPU sitting idle 60% of the time?
The answer is almost never the model or the GPU. It is the invisible pipeline upstream: JPEG decoding, random cropping, color jitter, and normalization — all running on the CPU. When the CPU cannot prepare batches fast enough, the GPU starves.
The GPU cannot start until stages 1 and 2 finish. If either is slower than the GPU, the accelerator utilization drops below 100%. This is the data pipeline bottleneck.
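The dependency above can be sketched as a `min()` over stage throughputs. A minimal back-of-envelope model (the GB/s numbers below are illustrative assumptions, not measurements):

```python
def pipeline_throughput(storage_gbps, cpu_gbps, gpu_gbps):
    """End-to-end throughput is set by the slowest stage."""
    return min(storage_gbps, cpu_gbps, gpu_gbps)

def gpu_utilization(storage_gbps, cpu_gbps, gpu_gbps):
    """Fraction of time the GPU does useful work instead of waiting."""
    return pipeline_throughput(storage_gbps, cpu_gbps, gpu_gbps) / gpu_gbps

# Hypothetical stage rates: fast NVMe, 8 CPU workers, A100-class demand.
print(f"{gpu_utilization(storage_gbps=7.0, cpu_gbps=2.0, gpu_gbps=10.9):.1%}")
# prints 18.3%
```

The `min()` structure is the whole story: improving any stage other than the slowest one changes nothing.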
2. GPU Compute Time: The Ceiling You Think You Have
We switch from LLM serving (Tutorials 2–3) to CNN training because the data pipeline bottleneck is most visible here. LLM training on tokenized text has a tiny data footprint (~8 MB/s as we will see in Tutorial 11). Image training with JPEG decoding, resizing, and augmentation can demand 10–100× more CPU work per sample — this is where the GPU actually starves.
First, establish how fast the A100 processes a ResNet-50 training step in isolation — no data loading, no preprocessing, just pure compute:
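Before trusting any profiler number, a rough FLOPs-based estimate is useful. The sketch below assumes roughly 4 GFLOPs per image for a ResNet-50 forward pass at 224×224, a 3× multiplier for forward plus backward, an A100 BF16 tensor-core peak of 312 TFLOPS, and 80% achieved utilization. All four constants are assumptions for this sanity check, not simulator outputs:

```python
batch_size = 256
flops_per_image = 4e9 * 3   # ~4 GFLOPs forward, ~3x for forward + backward
peak_flops = 312e12         # A100 BF16 tensor-core peak
mfu = 0.80                  # assumed achieved fraction of peak

step_time_s = batch_size * flops_per_image / (peak_flops * mfu)
print(f"{step_time_s * 1e3:.1f} ms")  # ~12 ms, the same ballpark as the profile
```

A pure-compute step in the low-teens of milliseconds is the ceiling everything else in this tutorial is measured against.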
3. Storage I/O: Reading the Bytes
── Storage I/O Check ───────────────────────
Data demand: 10.900 GB / second
Storage supply: 0.00 GB / second
Utilization: inf%
Is stalled: True
The 0.00 GB/s supply (and the resulting inf% utilization and stall flag) simply means no storage device was configured in this check. In practice, storage I/O is fine: modern NVMe SSDs can deliver multi-GB/s easily. The bottleneck is not reading the bytes. It is transforming them.
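As a sanity check on the demand figure: demand is just the rate at which a never-stalling GPU consumes bytes. The ~0.5 MB per-sample size below is inferred from the numbers in this tutorial, not a universal ImageNet constant:

```python
batch_size = 256
sample_bytes = 0.5e6     # assumed decoded/augmented sample size (inferred)
gpu_step_s = 11.74e-3    # A100 step time at batch 256

# Demand: bytes the GPU would consume per second if it never waited.
demand = batch_size * sample_bytes / gpu_step_s
print(f"{demand / 1e9:.1f} GB/s")  # prints 10.9 GB/s
```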
4. The Reveal: CPU Preprocessing Is the Wall
Even with fast storage, the CPU must decode JPEGs, apply random crops, color jitter, and normalization. A typical CPU worker processes ImageNet images at ~250 MB/s. With 8 workers, total CPU throughput is ~2 GB/s:
```python
from mlsysim import TransformationModel

transform_solver = TransformationModel()
cpu_throughput = Q_("2 GB/s")  # 8 workers x 250 MB/s each

t = transform_solver.solve(
    batch_size=256,
    sample_size_bytes=sample_size,
    cpu_throughput=cpu_throughput,
    accelerator_step_time=profile.latency,
)

info(
    "CPU vs GPU Pipeline",
    CPU_transform_time=t.transform_time,
    GPU_step_time=t.accelerator_step_time,
    CPU_is_bottleneck=t.is_bottleneck,
    GPU_utilization=f"{t.accelerator_utilization:.1%}",
    Slowdown_factor=f"{t.slowdown_factor:.2f}x",
)
```
── CPU vs GPU Pipeline ─────────────────────
CPU transform time: 64 ms
GPU step time: 11.74 ms
CPU is bottleneck: True
GPU utilization: 18.3%
Slowdown factor: 5.45x
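These numbers fall out of simple arithmetic, which is worth verifying by hand. Assuming 8 workers at 250 MB/s each and the same ~0.5 MB per-sample size inferred earlier:

```python
batch_size = 256
sample_bytes = 0.5e6            # assumed per-sample size (inferred)
cpu_bytes_per_s = 8 * 250e6     # 8 workers x 250 MB/s = 2 GB/s
gpu_step_s = 11.74e-3

transform_s = batch_size * sample_bytes / cpu_bytes_per_s
utilization = gpu_step_s / transform_s   # GPU busy fraction when CPU-bound
slowdown = transform_s / gpu_step_s

print(f"{transform_s * 1e3:.0f} ms, {utilization:.1%}, {slowdown:.2f}x")
# prints 64 ms, 18.3%, 5.45x
```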
Key Insight
The binding constraint is not silicon — it is JPEG decoding on the CPU. The data pipeline (Wall 9: Transformation) becomes the bottleneck before the GPU (Wall 1: Compute). Your GPU can process 5,300+ images per second, but your 8 CPU workers can only prepare ~850. The GPU sits idle waiting for data. This is why production training pipelines use GPU-accelerated preprocessing (NVIDIA DALI), pre-decoded datasets, or aggressive prefetching.
5. Batch Size Sweep: Finding the Crossover
Let’s sweep batch sizes to see how the imbalance grows. At small batches, fixed per-step overhead dominates the GPU time and the CPU nearly keeps up. At large batches, the GPU amortizes that overhead and becomes more efficient per image, while CPU preprocessing cost grows strictly linearly, so the CPU falls further behind:
Batch GPU Step CPU Xform Binding GPU Util
────────────────────────────────────────────────────
32 5.86 ms 8.00 ms Transformation 73.2%
64 6.70 ms 16.00 ms Transformation 41.9%
128 8.38 ms 32.00 ms Transformation 26.2%
256 11.74 ms 64.00 ms Transformation 18.3%
512 18.47 ms 128.00 ms Transformation 14.4%
1024 31.93 ms 256.00 ms Transformation 12.5%
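The sweep can be reproduced with a two-parameter model of the GPU step: a fixed per-step overhead plus a per-image cost. The two GPU constants below are fitted to the table above, not derived from hardware specs:

```python
GPU_OVERHEAD_MS = 5.02       # fitted fixed cost per step (launch, sync, ...)
GPU_PER_IMAGE_MS = 0.02625   # fitted marginal compute cost per image
CPU_PER_IMAGE_MS = 0.25      # 0.5 MB per sample / 2 GB/s aggregate CPU

for batch in [32, 64, 128, 256, 512, 1024]:
    gpu_ms = GPU_OVERHEAD_MS + GPU_PER_IMAGE_MS * batch
    cpu_ms = CPU_PER_IMAGE_MS * batch
    util = min(1.0, gpu_ms / cpu_ms)  # GPU waits whenever cpu_ms > gpu_ms
    print(f"{batch:5d}  {gpu_ms:8.2f} ms  {cpu_ms:8.2f} ms  {util:6.1%}")
```

The fit matches the table to within rounding, and it makes the mechanism explicit: the GPU's fixed overhead is what makes its per-step time grow sub-linearly, while the CPU has no such economy of scale.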
Watch the trend: with 8 workers, the transformation stage is the binding constraint at every batch size in this sweep. CPU preprocessing time grows linearly with batch size, while GPU step time grows sub-linearly as its fixed per-step overhead is amortized, so GPU utilization falls monotonically from 73% at batch 32 to 12% at batch 1024. A crossover back to a GPU-bound regime would require a faster preprocessing pipeline: more workers, or the GPU-side preprocessing of Exercise 3.
6. The Fix: Adding CPU Workers
The simplest fix for a CPU bottleneck is more workers. Let’s compare 8 vs. 16 vs. 32:
Doubling workers doubles throughput — but you eventually hit either storage I/O limits (Wall 8) or PCIe bandwidth. The takeaway: always check all three stages of the pipeline.
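A quick sketch of the worker comparison, under the same assumptions as before (an assumed 250 MB/s per worker, ~0.5 MB per sample, and an 11.74 ms GPU step at batch 256):

```python
import math

batch_bytes = 256 * 0.5e6    # batch 256 at an assumed ~0.5 MB per sample
gpu_step_s = 11.74e-3
worker_bytes_per_s = 250e6   # assumed per-worker decode + augment rate

for workers in [8, 16, 32]:
    transform_s = batch_bytes / (workers * worker_bytes_per_s)
    util = min(1.0, gpu_step_s / transform_s)
    print(f"{workers:2d} workers: transform {transform_s * 1e3:5.1f} ms, "
          f"GPU util {util:.1%}")

# Workers needed for CPU throughput to match GPU demand at batch 256:
needed = math.ceil(batch_bytes / (worker_bytes_per_s * gpu_step_s))
print(f"~{needed} workers to keep the A100 fed")
```

Under these assumptions, even 32 workers leave the GPU waiting, and roughly 44 would be needed before compute binds again. That many concurrent decoders is exactly what pushes you into the storage and PCIe limits mentioned above.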
Your Turn
Exercises
Exercise 1: Predict before you compute. At batch size 64 with 8 CPU workers (2 GB/s total), will ResNet-50 training on the A100 be GPU-bound or CPU-bound? Write your prediction, then run the code. What determines the answer? (Hint: compare transform_time vs. accelerator_step_time.)
Exercise 2: Medical imaging — larger samples. Medical imaging uses images 10x larger than ImageNet (~5 MB per sample). Change sample_size to Q_("5 MB") and re-run the batch size sweep. At what batch size does the CPU stall the GPU now? How many workers would you need to keep up at batch 256?
Exercise 3: GPU-accelerated preprocessing. If you use NVIDIA DALI to move preprocessing to the GPU, the CPU bottleneck effectively disappears. Model this by setting cpu_throughput = Q_("50 GB/s"). Run the sweep again. Does the bottleneck shift back to compute? What is the new GPU utilization at batch 512?
Self-check: If the GPU step takes 20 ms and CPU preprocessing takes 35 ms, what is the accelerator utilization? (Answer: 20/35 = 57%.)
Key Takeaways
Summary
Data pipelines have three stages: storage I/O, CPU preprocessing, and GPU compute — the slowest determines throughput
CPU preprocessing (Wall 9) is the most common bottleneck: JPEG decode, augmentation, and normalization are all CPU-bound
Batch size shifts the binding constraint: small batches are GPU-bound; large batches often become CPU-bound
Adding CPU workers helps linearly but has diminishing returns when storage I/O becomes the limit
Always check all three stages before concluding that the GPU is the bottleneck