Overview
Welcome to the hands-on labs section, where you'll explore deploying ML models onto real embedded devices as a practical introduction to ML systems. Unlike traditional approaches built around large-scale models, these labs focus on interacting directly with both hardware and software. They showcase various sensor modalities across different application use cases, providing valuable insights into the challenges and opportunities of deploying AI on real physical systems.
Learning Objectives
By completing these labs, you will:
Gain proficiency in setting up and deploying ML models on supported devices, enabling you to tackle real-world ML deployment scenarios with confidence.
Understand the steps involved in adapting and experimenting with ML models for different applications, allowing you to optimize performance and efficiency.
Learn troubleshooting techniques specific to embedded ML deployments, equipping you with the skills to overcome common pitfalls and challenges.
Acquire practical experience in deploying TinyML models on embedded devices, bridging the gap between theory and practice.
Explore various sensor modalities and their applications, expanding your understanding of how ML can be leveraged in diverse domains.
Foster an understanding of the real-world implications and challenges associated with ML system deployments, preparing you for future projects.
Target Audience
These labs are designed for:
Beginners in the field of machine learning who have a keen interest in exploring the intersection of ML and embedded systems.
Developers and engineers looking to apply ML models to real-world applications using low-power, resource-constrained devices.
Enthusiasts and researchers who want to gain practical experience in deploying AI on edge devices and understand the unique challenges involved.
Supported Devices
| Exercise | Nicla Vision | XIAO ESP32S3 | Raspberry Pi |
|---|---|---|---|
| Installation & Setup | ✓ | ✓ | ✓ |
| Keyword Spotting (KWS) | ✓ | ✓ | |
| Image Classification | ✓ | ✓ | ✓ |
| Object Detection | ✓ | ✓ | ✓ |
| Motion Detection | ✓ | ✓ | |
| Small Language Models (SLM) | | | ✓ |
Lab Structure
Each lab follows a structured approach:
Introduction: Explore the application and its significance in real-world scenarios.
Setup: Step-by-step instructions to configure the hardware and software environment.
Deployment: Guidance on training and deploying pre-trained ML models on supported devices.
Exercises: Hands-on tasks to modify and experiment with model parameters.
Discussion: Analysis of results, potential improvements, and practical insights.
Troubleshooting and Support
If you encounter any issues during the labs, consult the troubleshooting notes or FAQs within each lab. For further assistance, feel free to reach out to our support team or engage with the community forums.
Credits
Special credit and thanks to Prof. Marcelo Rovai for his valuable contributions to the development and continuous refinement of these labs.