LABS

The following labs provide a unique opportunity to gain hands-on experience deploying tinyML models onto real embedded devices. In contrast to working with large models that require data center-scale resources, these labs allow you to interact directly with the hardware and software, giving you a tangible understanding of the challenges and opportunities in embedded AI.

From setting up the Nicla Vision board to implementing computer vision, audio processing, and motion classification tasks with tools such as TensorFlow Lite for Microcontrollers and Arduino firmware, you'll develop practical skills in deploying efficient AI models on resource-constrained devices. By completing these labs, you'll appreciate the beauty of tinyML: the ability to hold cutting-edge AI technology in the palm of your hand. This hands-on perspective is invaluable for understanding the end-to-end workflow of embedded AI systems, and it will prepare you for real-world applications where model efficiency, robustness, and responsiveness are paramount. We plan to add labs for other platforms in the future, so stay tuned!

These lab exercises were contributed by Marcelo Rovai.