Module 15: Quantization#

Module Info

OPTIMIZATION TIER | Difficulty: ●●●○ | Time: 4-6 hours | Prerequisites: 01-14

Prerequisites: Modules 01-14 means you should have:

  • Built the complete foundation (Tensor through Training)

  • Implemented profiling tools to measure memory usage

  • Understanding of neural network parameters and forward passes

  • Familiarity with memory calculations and optimization trade-offs

If you can profile a model’s memory usage and understand the cost of FP32 storage, you’re ready.

Overview#

Modern neural networks face a memory wall problem. A BERT model requires 440 MB, GPT-2 needs 6 GB, and GPT-3 demands 700 GB, yet mobile devices have only 4-8 GB of RAM. The culprit? Every parameter uses 4 bytes of FP32 precision, representing values with 32-bit accuracy when 8 bits often suffice. Quantization solves this by converting FP32 weights to INT8, achieving a 4× memory reduction with typically less than 1% accuracy loss.

In this module, you’ll build a production-quality INT8 quantization system. You’ll implement the core quantization algorithm, create quantized layer classes, and develop calibration techniques that optimize quantization parameters for minimal accuracy degradation. By the end, you’ll compress entire neural networks from hundreds of megabytes to a fraction of their original size, enabling deployment on memory-constrained devices.

This isn’t just academic compression. Your implementation uses the same affine (scale and zero-point) INT8 quantization approach deployed in TensorFlow Lite, PyTorch Mobile, and ONNX Runtime, making models small enough to run on phones, IoT devices, and edge hardware without cloud connectivity.

Learning Objectives#

Tip

By completing this module, you will:

  • Implement INT8 quantization with scale and zero-point calculation for 4× memory reduction

  • Master calibration techniques that optimize quantization parameters using sample data distributions

  • Understand quantization error propagation and accuracy preservation strategies in compressed models

  • Connect your implementation to production frameworks like TensorFlow Lite and PyTorch quantization APIs

  • Analyze memory-accuracy trade-offs across different quantization strategies and model architectures

What You’ll Build#

        flowchart TB
    subgraph "Quantization System"
        A["quantize_int8()<br/>FP32 → INT8 conversion"]
        B["dequantize_int8()<br/>INT8 → FP32 restoration"]
        C["QuantizedLinear<br/>Quantized layer class"]
        D["quantize_model()<br/>Full network quantization"]
        E["Calibration<br/>Parameter optimization"]
    end

    A --> C
    B --> C
    C --> D
    E --> D

    style A fill:#e1f5ff
    style B fill:#fff3cd
    style C fill:#f8d7da
    style D fill:#d4edda
    style E fill:#e2d5f1
    

Fig. 24 Quantization System#

Implementation roadmap:

| Step | What You’ll Implement | Key Concept |
|------|-----------------------|-------------|
| 1 | quantize_int8() | Scale and zero-point calculation, INT8 mapping |
| 2 | dequantize_int8() | FP32 restoration with quantization parameters |
| 3 | QuantizedLinear | Quantized linear layer with compressed weights |
| 4 | calibrate() | Input quantization optimization using sample data |
| 5 | quantize_model() | Full model conversion and memory comparison |

The pattern you’ll enable:

# Compress a 400MB model to 100MB
quantize_model(model, calibration_data=sample_inputs)
# Now model uses 4× less memory with <1% accuracy loss

What You’re NOT Building (Yet)#

To keep this module focused, you will not implement:

  • Per-channel quantization (PyTorch supports this for finer-grained precision)

  • Mixed precision strategies (keeping sensitive layers in FP16/FP32)

  • Quantization-aware training (Module 16: Compression covers this)

  • INT8 GEMM kernels (production uses specialized hardware instructions like VNNI)

You are building per-tensor asymmetric (affine) INT8 quantization. Advanced quantization schemes come in production frameworks.

API Reference#

This section provides a quick reference for the quantization functions and classes you’ll build. Use this as your guide while implementing and debugging.

Core Functions#

quantize_int8(tensor: Tensor) -> Tuple[Tensor, float, int]

Convert FP32 tensor to INT8 with calculated scale and zero-point.

dequantize_int8(q_tensor: Tensor, scale: float, zero_point: int) -> Tensor

Restore INT8 tensor to FP32 using quantization parameters.
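
A hedged usage sketch of these two functions (the Tensor import path is assumed from earlier modules):

import numpy as np
from tinytorch.core.tensor import Tensor  # assumed path; use whatever Module 01 exports
from tinytorch.perf.quantization import quantize_int8, dequantize_int8

# Round-trip a small FP32 tensor through INT8
weights = Tensor(np.array([-0.42, 0.0, 0.17, 0.9], dtype=np.float32))
q_weights, scale, zero_point = quantize_int8(weights)    # INT8 data + quantization parameters
restored = dequantize_int8(q_weights, scale, zero_point) # approximate FP32 reconstruction
print(q_weights.data.dtype, scale, zero_point)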

QuantizedLinear Class#

| Method | Signature | Description |
|--------|-----------|-------------|
| __init__ | __init__(linear_layer: Linear) | Create quantized version of Linear layer |
| calibrate | calibrate(sample_inputs: List[Tensor]) | Optimize input quantization using sample data |
| forward | forward(x: Tensor) -> Tensor | Compute output with quantized weights |
| memory_usage | memory_usage() -> Dict[str, float] | Calculate memory savings achieved |

Model Quantization#

| Function | Signature | Description |
|----------|-----------|-------------|
| quantize_model | quantize_model(model, calibration_data=None) | Quantize all Linear layers in-place |
| compare_model_sizes | compare_model_sizes(original, quantized) | Measure compression ratio and memory saved |
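
A short usage sketch tying the two together (the exact dictionary returned by compare_model_sizes depends on your implementation; model and sample_inputs are assumed to exist as in the Overview):

import copy

from tinytorch.perf.quantization import quantize_model, compare_model_sizes

original = copy.deepcopy(model)                        # keep an FP32 copy for comparison
quantize_model(model, calibration_data=sample_inputs)  # replaces Linear layers in-place
print(compare_model_sizes(original, model))            # e.g. sizes in MB and compression ratio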

Core Concepts#

This section covers the fundamental ideas behind quantization. Understanding these concepts will help you implement efficient model compression and debug quantization errors.

Precision and Range#

Neural networks use FP32 (32-bit floating point) by default, which can represent approximately 4.3 billion unique values across a vast range from 10⁻³⁸ to 10³⁸. This precision is overkill for most inference tasks. Research shows that neural network weights typically cluster in a narrow range like [-3, 3] after training, and networks are naturally robust to small perturbations due to their continuous optimization.

INT8 quantization maps this continuous FP32 range to just 256 discrete values (from -128 to 127). The key insight is that we can preserve model accuracy by carefully choosing how to map these 256 levels across the actual range of values in each tensor. A tensor with values in [-0.5, 0.5] needs different quantization parameters than one with values in [-10, 10].

Consider the storage implications. A single FP32 parameter requires 4 bytes, while INT8 uses 1 byte. For a model with 100 million parameters, this is the difference between 400 MB (FP32) and 100 MB (INT8). The 4× compression ratio is consistent across all model sizes because we’re always reducing from 32 bits to 8 bits per value.
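
The arithmetic is worth writing down once; a few lines of plain Python confirm the numbers above:

# Back-of-the-envelope memory math for a 100M-parameter model
params = 100_000_000
fp32_mb = params * 4 / 1e6   # 4 bytes per FP32 value -> 400.0 MB
int8_mb = params * 1 / 1e6   # 1 byte per INT8 value  -> 100.0 MB
print(fp32_mb, int8_mb, fp32_mb / int8_mb)  # 400.0 100.0 4.0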

Quantization Schemes#

Symmetric quantization uses a linear mapping where FP32 zero maps to INT8 zero (zero-point = 0). This simplifies hardware implementation and works well for weight distributions centered around zero. Asymmetric quantization allows the zero-point to shift, better capturing ranges like [0, 1] or [-1, 3] where the distribution is not symmetric.

Your implementation uses asymmetric quantization for maximum flexibility:

def quantize_int8(tensor: Tensor) -> Tuple[Tensor, float, int]:
    """Quantize FP32 tensor to INT8 using asymmetric quantization."""
    data = tensor.data

    # Step 1: Find dynamic range
    min_val = float(np.min(data))
    max_val = float(np.max(data))

    # Step 2: Handle edge case (constant tensor)
    if abs(max_val - min_val) < EPSILON:
        scale = 1.0
        zero_point = 0
        quantized_data = np.zeros_like(data, dtype=np.int8)
        return Tensor(quantized_data), scale, zero_point

    # Step 3: Calculate scale and zero_point
    scale = (max_val - min_val) / (INT8_RANGE - 1)
    zero_point = int(np.round(INT8_MIN_VALUE - min_val / scale))
    zero_point = int(np.clip(zero_point, INT8_MIN_VALUE, INT8_MAX_VALUE))

    # Step 4: Apply quantization formula
    quantized_data = np.round(data / scale + zero_point)
    quantized_data = np.clip(quantized_data, INT8_MIN_VALUE, INT8_MAX_VALUE).astype(np.int8)

    return Tensor(quantized_data), scale, zero_point

The algorithm finds the minimum and maximum values in the tensor, then calculates a scale that maps this range to [-128, 127]. The zero-point determines which INT8 value represents FP32 zero, ensuring minimal quantization error at zero (important for ReLU activations and sparse patterns).

Scale and Zero-Point#

The scale parameter determines how large each INT8 step is in FP32 space. A scale of 0.01 means each INT8 increment represents 0.01 in the original FP32 values. Smaller scales provide finer precision but can only represent a narrower range; larger scales cover wider ranges but sacrifice precision.

The zero-point is an integer offset that shifts the quantization range. For a symmetric distribution like [-2, 2], the zero-point is 0, mapping FP32 zero to INT8 zero. For an asymmetric range like [-1, 3], the zero-point works out to roughly -64 (using the [-128, 127] INT8 range above), ensuring the quantization levels are distributed optimally across the actual data range.
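
You can check these numbers with the same formulas quantize_int8() uses (INT8 range [-128, 127], so 255 steps):

# Worked example for the asymmetric range [-1, 3]
min_val, max_val = -1.0, 3.0
scale = (max_val - min_val) / 255           # ≈ 0.0157 FP32 units per INT8 step
zero_point = round(-128 - min_val / scale)  # ≈ -64: the INT8 code that represents FP32 zero
print(scale, zero_point)                    # 0.01568... -64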

Here’s how dequantization reverses the process:

def dequantize_int8(q_tensor: Tensor, scale: float, zero_point: int) -> Tensor:
    """Dequantize INT8 tensor back to FP32."""
    dequantized_data = (q_tensor.data.astype(np.float32) - zero_point) * scale
    return Tensor(dequantized_data)

The formula (quantized - zero_point) × scale inverts the quantization mapping. If you quantized 0.5 with scale 0.02 and zero-point 60, you get round(0.5 / 0.02 + 60) = 85; dequantization computes (85 - 60) × 0.02 = 0.5. The round trip isn’t always this exact because quantization is lossy compression, but for in-range values the error is at most about half the scale (rounding to the nearest INT8 level).
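
A quick empirical check of that bound, assuming the functions defined earlier in this module:

import numpy as np
from tinytorch.core.tensor import Tensor  # assumed path from earlier modules

# Round-trip random weights and measure the worst-case error
values = Tensor(np.random.uniform(-0.5, 0.5, size=10_000).astype(np.float32))
q, scale, zp = quantize_int8(values)
restored = dequantize_int8(q, scale, zp)

max_error = float(np.max(np.abs(restored.data - values.data)))
print(max_error, scale / 2)  # max_error should be at most about scale / 2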

Post-Training Quantization#

Post-training quantization converts a pre-trained FP32 model to INT8 without retraining. This is the approach your implementation uses. The QuantizedLinear class wraps existing Linear layers, quantizing their weights and optionally their inputs:

class QuantizedLinear:
    """Quantized version of Linear layer using INT8 arithmetic."""

    def __init__(self, linear_layer: Linear):
        """Create quantized version of existing linear layer."""
        self.original_layer = linear_layer

        # Quantize weights
        self.q_weight, self.weight_scale, self.weight_zero_point = quantize_int8(linear_layer.weight)

        # Quantize bias if it exists
        if linear_layer.bias is not None:
            self.q_bias, self.bias_scale, self.bias_zero_point = quantize_int8(linear_layer.bias)
        else:
            self.q_bias = None
            self.bias_scale = None
            self.bias_zero_point = None

        # Store input quantization parameters (set during calibration)
        self.input_scale = None
        self.input_zero_point = None

The forward pass dequantizes weights on-the-fly, performs FP32 matrix multiplication, and returns FP32 outputs. This educational approach makes the code simple to understand, though production implementations use INT8 GEMM (general matrix multiply) operations for speed:

def forward(self, x: Tensor) -> Tensor:
    """Forward pass with quantized computation."""
    # Dequantize weights
    weight_fp32 = dequantize_int8(self.q_weight, self.weight_scale, self.weight_zero_point)

    # Perform computation (same as original layer)
    result = x.matmul(weight_fp32)

    # Add bias if it exists
    if self.q_bias is not None:
        bias_fp32 = dequantize_int8(self.q_bias, self.bias_scale, self.bias_zero_point)
        result = Tensor(result.data + bias_fp32.data)

    return result
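
The API table also lists memory_usage(). One possible sketch (the dictionary keys here are illustrative, not prescribed; it assumes Dict is imported from typing, that Tensor.data is a NumPy array, and that weights and bias dominate the layer’s footprint):

def memory_usage(self) -> Dict[str, float]:
    """Report FP32 vs INT8 parameter memory (illustrative keys)."""
    n_values = self.q_weight.data.size
    if self.q_bias is not None:
        n_values += self.q_bias.data.size
    original_bytes = n_values * 4    # FP32: 4 bytes per parameter
    quantized_bytes = n_values * 1   # INT8: 1 byte per parameter
    return {
        "original_mb": original_bytes / 1e6,
        "quantized_mb": quantized_bytes / 1e6,
        "compression_ratio": original_bytes / quantized_bytes,
    }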

Calibration Strategy#

Calibration is the process of finding optimal quantization parameters by analyzing sample data. Without calibration, generic quantization parameters may waste precision or clip important values. The calibration method in QuantizedLinear runs sample inputs through the layer and collects statistics:

def calibrate(self, sample_inputs: List[Tensor]):
    """Calibrate input quantization parameters using sample data."""
    # Collect all input values
    all_values = []
    for inp in sample_inputs:
        all_values.extend(inp.data.flatten())

    all_values = np.array(all_values)

    # Calculate input quantization parameters
    min_val = float(np.min(all_values))
    max_val = float(np.max(all_values))

    if abs(max_val - min_val) < EPSILON:
        self.input_scale = 1.0
        self.input_zero_point = 0
    else:
        self.input_scale = (max_val - min_val) / (INT8_RANGE - 1)
        self.input_zero_point = int(np.round(INT8_MIN_VALUE - min_val / self.input_scale))
        self.input_zero_point = int(np.clip(self.input_zero_point, INT8_MIN_VALUE, INT8_MAX_VALUE))

Calibration typically requires 100-1000 representative samples. Too few samples might miss important distribution characteristics; too many waste time with diminishing returns. The goal is capturing the typical range of activations the model will see during inference.
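
If you have far more data than you need, a simple random subsample is usually enough. A hypothetical helper (not part of the module’s API) might look like this:

import numpy as np

def sample_calibration_inputs(inputs, n_samples=500, seed=0):
    """Pick a random, representative subset of calibration inputs (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    n = min(n_samples, len(inputs))
    chosen = rng.choice(len(inputs), size=n, replace=False)
    return [inputs[int(i)] for i in chosen]

# q_layer.calibrate(sample_calibration_inputs(all_training_inputs))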

Production Context#

Your Implementation vs. PyTorch#

Your quantization system implements the core algorithms used in production frameworks. The main differences are in scale (production supports many quantization schemes) and performance (production uses INT8 hardware instructions).

| Feature | Your Implementation | PyTorch Quantization |
|---------|---------------------|----------------------|
| Algorithm | Asymmetric INT8 quantization | Multiple schemes (INT8, INT4, FP16, mixed) |
| Calibration | Min/max statistics | MinMax, histogram, percentile observers |
| Backend | NumPy (FP32 compute) | INT8 GEMM kernels (FBGEMM, QNNPACK) |
| Speed | 1× (baseline) | 2-4× faster with INT8 ops |
| Memory | 4× reduction | 4× reduction (same compression) |
| Granularity | Per-tensor | Per-tensor, per-channel, per-group |

Code Comparison#

The following comparison shows quantization in TinyTorch versus PyTorch. The APIs are remarkably similar, reflecting the universal nature of the quantization problem.

TinyTorch:

from tinytorch.perf.quantization import quantize_model, QuantizedLinear
from tinytorch.core.layers import Linear, Sequential

# Create model
model = Sequential(
    Linear(784, 128),
    Linear(128, 10)
)

# Quantize to INT8
calibration_data = [sample_batch1, sample_batch2, ...]
quantize_model(model, calibration_data)

# Use quantized model
output = model.forward(x)  # 4× less memory!

PyTorch:

import torch
import torch.quantization as quantization

# Create model
model = torch.nn.Sequential(
    torch.nn.Linear(784, 128),
    torch.nn.Linear(128, 10)
)

# Quantize to INT8
model.qconfig = quantization.get_default_qconfig('fbgemm')
model_prepared = quantization.prepare(model)
# Run calibration
for batch in calibration_data:
    model_prepared(batch)
model_quantized = quantization.convert(model_prepared)

# Use quantized model
output = model_quantized(x)  # 4× less memory!

Let’s walk through the key differences:

  • Imports: TinyTorch uses the quantize_model() function; PyTorch uses the torch.quantization module with a prepare/convert API.

  • Model creation: Both create identical model architectures. The layer APIs are the same.

  • Quantization: TinyTorch uses a one-step quantize_model() call with calibration data. PyTorch uses a three-step API: configure (qconfig), prepare (insert observers), convert (replace with quantized ops).

  • Calibration: TinyTorch passes calibration data as an argument; PyTorch requires an explicit calibration loop of forward passes.

  • Inference: Both use a standard forward pass. The quantized weights are transparent to the user.

Tip

What’s Identical

The core quantization mathematics: scale calculation, zero-point mapping, INT8 range clipping. When you debug PyTorch quantization errors, you’ll understand exactly what’s happening because you implemented the same algorithms.

Why Quantization Matters at Scale#

To appreciate why quantization is critical for production ML, consider these deployment scenarios:

  • Mobile AI: A typical iPhone has 6-8 GB of RAM shared across all apps. A quantized BERT (110 MB) fits comfortably; the FP32 version (440 MB) creates memory pressure and risks the app being terminated.

  • Edge computing: IoT devices often have 512 MB RAM. Quantization enables on-device inference for privacy-sensitive applications (medical devices, security cameras).

  • Data centers: Serving 1000 requests/second requires multiple model replicas. With a 4× memory reduction you can fit roughly 4× more replicas per GPU, potentially cutting memory-bound serving costs by up to 75%.

  • Battery life: INT8 operations consume 2-4× less energy than FP32 on mobile processors. Quantized models drain battery slower, improving user experience.

Check Your Understanding#

Test your quantization knowledge with these systems thinking questions. They’re designed to build intuition for memory, precision, and performance trade-offs.

Q1: Memory Calculation

A neural network has three Linear layers: 784→256, 256→128, 128→10. How much memory do the weights consume in FP32 vs INT8? Include bias terms.

Q2: Quantization Error Bound

For FP32 weights uniformly distributed in [-0.5, 0.5], what is the maximum quantization error after INT8 quantization? What is the signal-to-noise ratio in decibels?

Q3: Calibration Strategy

You’re quantizing a model for deployment. You have 100,000 calibration samples available. How many should you use, and why? What’s the trade-off?

Q4: Memory Bandwidth Impact

A model has 100M parameters. Loading from SSD to RAM at 500 MB/s, how long does loading take for FP32 vs INT8? How does this affect user experience?

Q5: Hardware Acceleration

Modern CPUs have AVX-512 VNNI instructions that can perform INT8 matrix multiply. How many INT8 operations fit in one 512-bit SIMD register vs FP32? Why might actual speedup be less than this ratio?

Further Reading#

For students who want to understand the academic foundations and production implementations of quantization:

Seminal Papers#

  • Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference - Jacob et al. (2018). The foundational paper for symmetric INT8 quantization used in TensorFlow Lite. Introduces quantized training and deployment. arXiv:1712.05877

  • Mixed Precision Training - Micikevicius et al. (2018). NVIDIA’s approach to training with FP16/FP32 mixed precision, reducing memory and increasing speed. Concepts extend to INT8 quantization. arXiv:1710.03740

  • Data-Free Quantization Through Weight Equalization and Bias Correction - Nagel et al. (2019). Techniques for quantizing models without calibration data, using statistical properties of weights. arXiv:1906.04721

  • ZeroQ: A Novel Zero Shot Quantization Framework - Cai et al. (2020). Shows how to quantize models without any calibration data by generating synthetic inputs. arXiv:2001.00281

Additional Resources#

  • Blog post: “Quantization in PyTorch” - Official PyTorch quantization tutorial covering eager mode and FX graph mode quantization

  • Documentation: TensorFlow Lite Post-Training Quantization - Production quantization techniques for mobile deployment

  • Survey: “A Survey of Quantization Methods for Efficient Neural Network Inference” - Gholami et al. (2021) - Comprehensive overview of quantization research. arXiv:2103.13630

What’s Next#

See also

Coming Up: Module 16 - Compression

Implement model pruning and weight compression techniques. You’ll build structured pruning that removes entire neurons and channels, achieving 2-10× speedup by reducing computation, not just memory.

Preview - How Quantization Combines with Future Techniques:

| Module | What It Does | Quantization In Action |
|--------|--------------|------------------------|
| 16: Compression | Prune unnecessary weights | quantize_model(pruned_model) → 16× total compression |
| 18: Acceleration | Optimize kernel fusion | accelerate(quantized_model) → 8× faster inference |
| 20: Capstone | Deploy optimized models | Full pipeline: prune → quantize → accelerate → deploy |

Get Started#

Warning

Save Your Progress

Binder and Colab sessions are temporary. Download your completed notebook when done, or clone the repository for persistent local work.