Tiny Machine Learning (TinyML)

10-Week Course: From ML Fundamentals to Embedded Deployment

Course Overview

Courseware: TinyML Slides & Readings (178 slide decks, 127 readings)
Textbook: Volume I: Introduction to Machine Learning Systems (companion reference)
Duration: 10–12 weeks (20–24 lectures at 75 min each)
Prerequisites: Programming (Python), basic linear algebra
Scope: ML on resource-constrained embedded devices (microcontrollers, edge)
Hardware: Arduino Nicla Vision or equivalent TinyML kit
edX Course: HarvardX Professional Certificate in TinyML

Course Goal: Students learn to design, train, and deploy machine learning models on microcontrollers. By the end, they will have deployed keyword spotting, visual wake words, and gesture recognition applications on real embedded hardware.

Note: Relationship to ML Systems

TinyML is to ML Systems what Embedded OS is to Operating Systems — the same core principles applied under extreme resource constraints. This course pairs naturally with AI Systems Foundations and shares textbook chapters on compression, hardware acceleration, and deployment.

Tip: Courseware

All slide decks and readings are available at mlsysbook.ai/slides/tinyml.html. File numbers (e.g., “§3.5 Slide 1”) correspond to the PDF naming convention (3-5-1.pdf).


Part I: Fundamentals (Weeks 1–3)

Goal: Build a working understanding of ML, from pattern recognition to CNNs.

Week 1: Welcome to TinyML

Slides: Ch 1.1–1.3: Course overview, TinyML vision, challenges
Read: Ch 1.2 case studies, Ch 1.3 “Why the Future of ML is Tiny”
Textbook: Introduction (Vol I Ch 1)
Due: Background quiz, forum introduction

Learning Objectives: Define TinyML and articulate why it matters. Identify the key constraints (memory, power, latency) that distinguish TinyML from cloud ML. Describe 3+ real-world TinyML applications.

Instructor Tip

Start class with a live demo: run a keyword spotting model on an Arduino. Students see ML on a microcontroller before they learn any theory. This sets the “why” for the entire course.

Week 2: The Machine Learning Paradigm

Slides: Ch 2.1–2.2: ML paradigm, deep learning building blocks
Read: Ch 2.1 neural network case studies, Ch 2.2 initialization and coding realities
Colab: Exploring Loss, Gradient Descent, First Neural Network
Textbook: ML Workflow (Vol I Ch 3)
Due: Neural network coding assignment

Learning Objectives: Explain the ML paradigm (patterns, loss, optimization). Implement a simple neural network in TensorFlow. Distinguish training, validation, and test data and explain why the split matters.
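The paradigm above (patterns, loss, optimization) can be sketched in a few lines without any framework. This toy example, a single weight fit to the line y = 2x by gradient descent on mean squared error, uses made-up data and a made-up learning rate; the course Colabs do the same thing in TensorFlow.

```python
# A minimal sketch of the ML paradigm: pick a loss, then minimize it with
# gradient descent. The "model" is one weight w fit to y = 2x, so the
# loss has a known minimum at w = 2.
def loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # derivative of the mean squared error above with respect to w
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]              # noiseless y = 2x
w = 0.0
for _ in range(200):
    w -= 0.05 * grad(w, xs, ys)   # learning rate 0.05

print(round(w, 3))                # converges toward 2.0
```

The same loop, with tensors in place of floats and autodiff in place of the hand-written gradient, is what `model.fit` runs under the hood.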

Week 3: CNNs and Computer Vision

Slides: Ch 2.3–2.5: Convolutions, CNNs, computer vision, responsible AI
Read: Ch 2.3 feature mapping, Ch 2.4 overfitting and regularization
Colab: Filters, Fashion MNIST, Image Augmentation
Textbook: Neural Computation (Vol I Ch 5)
Due: CNN coding assignment, Responsible AI forum

Learning Objectives: Explain how convolutions extract features from images. Build a CNN for image classification. Apply data augmentation and dropout to mitigate overfitting. Articulate ethical considerations in AI system design.
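How a convolution extracts features is easy to show by hand. The sketch below is a dependency-free "valid" convolution (strictly, cross-correlation, as in most DL frameworks) applied to a tiny made-up image: a Sobel-style kernel responds strongly where the image goes from dark to bright.

```python
# Hand-rolled 2D "valid" convolution: slide a kernel over the image and
# sum elementwise products. A vertical-edge kernel on a half-dark image
# produces a uniformly strong response around the edge.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            )
    return out

image = [[0, 0, 1, 1]] * 4          # dark left half, bright right half
sobel_x = [[-1, 0, 1],              # responds to vertical edges
           [-2, 0, 2],
           [-1, 0, 1]]
fmap = conv2d(image, sobel_x)
print(fmap)
```

A CNN learns kernels like `sobel_x` from data instead of hard-coding them; stacking such filters is what builds up feature maps.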


Part II: Applications (Weeks 4–6)

Goal: Apply ML to real TinyML use cases — audio, vision, and anomaly detection.

Week 4: TensorFlow Lite and Quantization

Slides: Ch 3.1–3.4: TFLite, optimization, quantization (PTQ and QAT)
Read: Ch 3.3 TFLite models, Ch 3.4 “Why 8-bits are enough”
Colab: TFLite Converter, Running TFLite Models, Quantization
Textbook: Model Compression (Vol I Ch 10)
Due: Quantization assignment

Learning Objectives: Convert a TensorFlow model to TensorFlow Lite. Explain post-training quantization and quantization-aware training. Measure accuracy-size trade-offs for quantized models.
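The core idea behind post-training quantization can be shown without the converter: map floats to int8 through a scale and zero point, then dequantize and measure the round-trip error. The weights and the symmetric-range choice below are toy values, not what the TFLite converter computes for a real model.

```python
# Sketch of affine (asymmetric) int8 quantization, the idea behind
# TFLite's post-training quantization: q = round(x / scale + zero_point).
def quant_params(w_min, w_max, qmin=-128, qmax=127):
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    return scale, zero_point

def quantize(xs, scale, zp):
    return [max(-128, min(127, round(x / scale + zp))) for x in xs]

def dequantize(qs, scale, zp):
    return [(q - zp) * scale for q in qs]

weights = [-1.0, -0.5, 0.0, 0.4, 0.9]       # toy weight tensor
scale, zp = quant_params(min(weights), max(weights))
q = quantize(weights, scale, zp)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q, round(max_err, 4))                 # 4x smaller, small error
```

The accuracy-size trade-off the assignment measures is exactly this: each weight drops from 4 bytes to 1, at the cost of at most `scale / 2` of rounding error per value.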

Week 5: Keyword Spotting

Slides: Ch 3.5–3.6: KWS architecture, datasets, spectrograms, data engineering
Read: Ch 3.5 spectrograms/MFCCs, Ch 3.6 Speech Commands dataset
Colab: Spectrograms & MFCCs, Pretrained KWS model, Training KWS
Textbook: Benchmarking (Vol I Ch 12)
Due: KWS training assignment

Learning Objectives: Describe the end-to-end keyword spotting pipeline. Explain audio preprocessing (spectrograms, MFCCs). Train and evaluate a keyword spotting model. Apply responsible data collection practices.
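The spectrogram step of that pipeline is just framing plus a per-frame DFT magnitude. The sketch below uses a naive O(n²) DFT on a synthetic test tone to stay dependency-free; real KWS preprocessing uses an FFT plus mel filtering and MFCC stages, and the frame/hop sizes here are arbitrary.

```python
# Sketch of the spectrogram stage in a KWS front end: chop the signal
# into overlapping frames, take each frame's DFT magnitude, and stack
# the columns. Naive DFT for clarity, not speed.
import math

def dft_mag(frame):
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):          # real input: keep non-negative bins
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(signal, frame_len=64, hop=32):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return [dft_mag(f) for f in frames]

# Test tone with 8 cycles per 64-sample frame, so the energy
# lands in frequency bin k = 8 of every spectrogram column.
signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(signal)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(len(spec), peak_bin)
```

The 2-D array `spec` (time × frequency) is what the KWS model actually consumes: audio classification becomes image classification over this grid.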

Week 6: Visual Wake Words and Anomaly Detection

Slides: Ch 3.7–3.9: VWW, anomaly detection, responsible AI development
Read: Ch 3.7 MobileNets, transfer learning; Ch 3.8 autoencoders
Colab: Transfer Learning, K-means, Autoencoders
Textbook: Hardware Acceleration (Vol I Ch 11)
Due: Transfer learning assignment, anomaly detection assignment

Learning Objectives: Apply transfer learning for visual wake words. Implement unsupervised anomaly detection with autoencoders. Evaluate bias and fairness in ML datasets and models.
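The anomaly-detection logic reduces to one idea: score each sample by how badly the autoencoder reconstructs it, and flag scores above a threshold fit on normal data. In the sketch below a fixed projection onto the line y = x stands in for a trained autoencoder's bottleneck, and all data and margins are invented.

```python
# Sketch of autoencoder-style anomaly detection: samples the model can't
# reconstruct well are anomalies. The "autoencoder" here is a fixed 1-D
# bottleneck (mean of the two coordinates), a stand-in for a trained model.
def reconstruct(p):
    m = (p[0] + p[1]) / 2            # encode: 2-D point -> 1-D code
    return (m, m)                    # decode: back onto the line y = x

def recon_error(p):
    r = reconstruct(p)
    return (p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2

normal = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.05), (0.5, 0.6)]
threshold = max(recon_error(p) for p in normal) * 1.5   # crude margin

test_points = [(4.0, 4.1), (2.0, 5.0)]   # in-pattern, off-pattern
flags = [recon_error(p) > threshold for p in test_points]
print(flags)
```

Note that no anomalous examples were needed to set the threshold; that is what makes the approach unsupervised and practical for rare fault conditions.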


Part III: Deployment (Weeks 7–9)

Goal: Deploy trained models on real microcontroller hardware.

Important: Hardware Required

Starting in Week 7, students need a TinyML kit (Arduino Nicla Vision, Arduino Nano 33 BLE Sense, or equivalent). See the Hardware Kits page for ordering information and setup guides.

Week 7: Embedded Systems and TFLite Micro

Slides: Ch 4.1–4.4: Embedded hardware/software, TFLite Micro internals
Read: Ch 4.2 C++ for Python users, hardware/software setup; Ch 4.4 interpreter, FlatBuffers, tensor arena
Lab: Arduino Blink, TensorFlow install test, sensor verification
Textbook: Hardware Acceleration (Vol I Ch 11)
Due: Hardware setup verified, sensor test passed

Learning Objectives: Set up the TinyML development environment. Explain the TFLite Micro interpreter architecture. Describe memory allocation (tensor arena) on a microcontroller. Distinguish MCU constraints from cloud/mobile constraints.
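The tensor arena concept is worth seeing in miniature. TFLite Micro reserves one fixed buffer at compile time and carves every tensor out of it with a simple bump allocator, so there is no malloc/free during inference. The Python sketch below only models that allocation pattern (the sizes and the 16-byte alignment are illustrative, not TFLite Micro's actual numbers).

```python
# Model of a tensor arena: one fixed buffer, a bump allocator, and a
# hard failure when the arena is too small -- the same failure mode
# students hit when kTensorArenaSize is undersized on device.
class TensorArena:
    def __init__(self, size):
        self.size = size        # total bytes reserved up front
        self.offset = 0         # next free byte

    def allocate(self, nbytes, align=16):
        # round the current offset up to the alignment boundary
        start = (self.offset + align - 1) // align * align
        if start + nbytes > self.size:
            raise MemoryError("arena too small: increase the arena size")
        self.offset = start + nbytes
        return start            # offset into the arena, like a pointer

arena = TensorArena(10 * 1024)          # 10 KB arena (illustrative)
input_off = arena.allocate(490 * 2)     # e.g. int16 audio features
act_off = arena.allocate(4000)          # scratch activations
print(input_off, act_off, arena.offset)
```

Because every allocation is an offset into one static buffer, peak memory is known before the first inference runs, which is exactly the property an MCU with a few hundred KB of SRAM needs.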

Instructor Tip

Budget extra office hours this week. Hardware setup is where students hit the most friction. Having a TA who has debugged USB/driver issues is invaluable.

Week 8: Deploying KWS and Custom Datasets

Slides: Ch 4.5–4.6: KWS deployment (init, preprocessing, inference, post-processing), custom datasets
Read: Ch 4.5 KWS workflow, Ch 4.6 custom data collection planning
Lab: Deploy pretrained KWS, deploy custom keyword model
Textbook: Model Serving (Vol I Ch 13)
Due: Deployed KWS demo, custom dataset model

Learning Objectives: Deploy a pretrained keyword spotting model on Arduino. Trace the on-device inference pipeline (init → preprocess → infer → post-process). Collect a custom dataset and deploy a personalized KWS model.

Week 9: Deploying VWW and Gesture Recognition

Slides: Ch 4.7–4.9: Person detection, multi-tenancy, Magic Wand, responsible deployment
Read: Ch 4.7 multi-tenant applications, Ch 4.8 IMU anatomy, Ch 4.9 privacy and security
Lab: Deploy person detection, multi-tenant app, Magic Wand
Textbook: Model Serving (Vol I Ch 13)
Due: VWW deployment, gesture recognition demo

Learning Objectives: Deploy visual wake words on a camera-equipped MCU. Implement a multi-tenant application (vision + audio). Deploy gesture recognition using IMU data. Articulate privacy and security considerations for on-device ML.


Part IV: MLOps and Scaling (Weeks 10–12, Optional)

Goal: Understand how to manage TinyML models in production at scale.

Note: Flexible Pacing

Part IV can be taught as weeks 10–12 of a 12-week course, or offered as an optional advanced module. For a 10-week course, end after Week 9 with a final project demo week.

Week 10: MLOps Fundamentals and ML Development

Slides: Ch 5.1–5.3: MLOps overview, ML development lifecycle
Read: Ch 5.1 MLOps overview, Ch 5.2 DevOps vs. MLOps, Ch 5.3 data selection and feature engineering
Textbook: ML Operations (Vol I Ch 14)
Due: MLOps case study analysis

Learning Objectives: Define MLOps and distinguish it from DevOps. Trace the ML development lifecycle from problem definition to model evaluation. Explain CI/CD in the context of ML systems.

Week 11: Training, Conversion, and Deployment at Scale

Slides: Ch 5.4–5.7: Training operationalization, continuous training, model conversion, deployment
Read: Ch 5.5 AutoML and NAS, Ch 5.6 pruning/clustering/distillation, Ch 5.7 TinyMLaaS
Textbook: Responsible Engineering (Vol I Ch 15)
Due: Model conversion pipeline exercise

Learning Objectives: Explain continuous training triggers and strategies. Apply model optimization techniques (pruning, clustering, quantization, distillation). Describe the TinyMLaaS architecture for scaled deployment.
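Of those optimization techniques, magnitude pruning is the simplest to sketch: zero out the smallest-magnitude fraction of weights, on the assumption that small weights contribute least to the output. The weight list below is invented, and real pipelines prune whole tensors (often gradually) and fine-tune afterward to recover accuracy.

```python
# Sketch of one-shot magnitude pruning: drop the smallest `sparsity`
# fraction of weights by absolute value. Ties at the cutoff may prune
# slightly more than the requested fraction.
def prune(weights, sparsity):
    k = int(len(weights) * sparsity)             # how many weights to drop
    if k == 0:
        return list(weights)
    cutoff = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= cutoff else w for w in weights]

weights = [0.9, -0.02, 0.4, 0.01, -0.7, 0.05, -0.3, 0.08]
pruned = prune(weights, sparsity=0.5)
print(pruned)                                    # half the weights zeroed
```

The zeros compress well (and can be skipped by sparse kernels), which is why pruning combines naturally with the quantization and clustering steps covered in the same slides.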

Week 12: Monitoring, Management, and Responsible AI

Slides: Ch 5.8–5.12: Prediction serving, continuous monitoring, data/model management, responsible AI
Read: Ch 5.9 concept/data drift, Ch 5.11 sustainability and model cards
Textbook: Responsible Engineering (Vol I Ch 15)
Due: Final project presentation

Learning Objectives: Distinguish batch, online, streaming, and embedded serving scenarios. Explain concept drift and data drift and how to detect them. Design a monitoring strategy for deployed TinyML models. Create a model card for a TinyML application.
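One simple drift detector that fits the monitoring objective above: compare a sliding window of recent inputs against reference statistics saved at training time, and flag when the window mean shifts by more than a few standard errors. The data, window size, and 3-sigma threshold below are all illustrative choices, not a prescribed method.

```python
# Sketch of a windowed data-drift check: fit mean/std on training-time
# data, then flag deployment windows whose mean moves more than k
# standard errors away from the reference.
import math

def fit_reference(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return mean, math.sqrt(var)

def drifted(window, ref_mean, ref_std, k=3.0):
    win_mean = sum(window) / len(window)
    # standard error of the window mean under the reference distribution
    se = ref_std / math.sqrt(len(window))
    return abs(win_mean - ref_mean) > k * se

ref_mean, ref_std = fit_reference([0.1, -0.2, 0.05, 0.0, 0.15, -0.1])
stable  = [0.05, -0.1, 0.0, 0.1]    # looks like training data
shifted = [0.9, 1.1, 1.0, 0.8]      # e.g. a sensor drifted upward
print(drifted(stable, ref_mean, ref_std), drifted(shifted, ref_mean, ref_std))
```

On a fleet of TinyML devices, the reference statistics would ship with the model and only the flag (not the raw data) would be reported upstream, which keeps monitoring compatible with on-device privacy.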


Assessment Suggestions

Colab Assignments (30%): Weekly coding assignments (Chapters 1–3)
Hardware Labs (30%): On-device deployments (Chapter 4)
Quizzes (15%): Formative and summative quizzes from edX materials
Final Project (25%): End-to-end TinyML application of the student’s choice

Tip: Final Project Ideas

Students choose a sensor modality (audio, vision, IMU, environmental), collect a custom dataset, train and quantize a model, deploy on hardware, and present results. Strong projects include a model card and discussion of failure modes.


Adapting This Syllabus

6-Week Intensive

Cover Parts I–III only (skip Part IV). Compress Weeks 2–3 into one week if students have ML background. Focus on deployment labs.

16-Week Full Course

Expand Part IV to 4 weeks. Add a midterm exam after Week 6. Include a research paper reading component. Pair with Hardware Kits for deeper embedded systems coverage.

As a Module Within ML Systems

Teach Weeks 4–9 (Parts II–III) as a 6-week TinyML module within the AI Systems Foundations course, replacing or supplementing the compression and deployment chapters.
