IDE Setup

Setting up your integrated development environment (IDE) is a critical first step that shapes your success throughout the laboratory sequence. Unlike cloud-based ML development, where infrastructure is abstracted away, embedded systems require you to understand the complete toolchain, from code compilation to hardware deployment. This hands-on setup process introduces fundamental concepts of embedded development workflows while preparing your workstation for the laboratory exercises.

Environment setup typically requires 30-60 minutes depending on platform choice and internet connection speed. The procedures below are designed to be completed by students with no prior embedded systems experience, with each step building the skills needed for subsequent laboratory work.

After completing hardware selection as outlined in the Hardware Kits chapter, these procedures will establish the development tools, libraries, and verification methods needed for embedded ML programming.

Platform-Specific Software Installation

Each hardware platform requires a different development approach that reflects real-world embedded engineering practices: Arduino-based systems emphasize resource efficiency and real-time constraints, the Raspberry Pi demonstrates full-stack edge computing capabilities, and specialized AI hardware showcases dedicated acceleration techniques.

Select the installation procedures appropriate for your chosen hardware platform.

Arduino-Based Platforms (Nicla Vision, XIAOML Kit)

Arduino-based embedded systems provide direct hardware control with minimal abstraction layers, making them ideal for understanding resource constraints and optimization techniques. The development environment emphasizes immediate feedback between code changes and system behavior.

Arduino IDE Installation:

  1. Download Arduino IDE 2.0 from arduino.cc/software
  2. Install following the platform-specific setup wizard
  3. Launch Arduino IDE and navigate to File → Preferences
  4. Add board support URLs:
    • For XIAOML Kit: https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
    • For Nicla Vision: no additional URL is needed; the board package is available directly in the Boards Manager

Board Package Installation:

  1. Open Tools → Board → Boards Manager
  2. Search for your platform:
    • XIAOML Kit: Search “ESP32” and install “esp32 by Espressif Systems”
    • Nicla Vision: Search “Arduino Mbed OS Nicla Boards” and install
  3. Select your board from Tools → Board menu
  4. Install required libraries via Library Manager
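
If you prefer a scriptable setup, the same board packages can be installed from the command line with Arduino's arduino-cli tool. The following is a minimal sketch rather than a required step; it assumes arduino-cli is already installed, and it reuses the ESP32 board URL listed above and the official Arduino Mbed OS Nicla Boards core identifier.

# Initialize the arduino-cli configuration file
arduino-cli config init

# XIAOML Kit (ESP32): register the board index and install the core
arduino-cli core update-index \
  --additional-urls https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json
arduino-cli core install esp32:esp32 \
  --additional-urls https://raw.githubusercontent.com/espressif/arduino-esp32/gh-pages/package_esp32_index.json

# Nicla Vision: the Mbed OS Nicla core is in the default index (no extra URL)
arduino-cli core install arduino:mbed_nicla

# Confirm the connected board and its port are detected
arduino-cli board list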

Essential Libraries:

  • TensorFlow Lite Micro
  • Platform-specific camera drivers
  • Sensor interface libraries (I2C, SPI)

Grove Vision AI V2 Platform

This platform introduces hardware-accelerated AI through dedicated neural processing units, demonstrating how specialized silicon achieves performance improvements that are difficult to match with general-purpose processors. The visual programming interface supports rapid prototyping, while traditional development environments provide deeper customization options.

SenseCraft AI Setup:

  1. Create account at sensecraft.seeed.cc
  2. Connect Grove Vision AI V2 via USB
  3. Access device through SenseCraft AI web interface
  4. No local software installation required for visual programming workflow
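
Before opening the web interface, it helps to confirm that the board enumerates as a USB serial device. The quick check below is a sketch for macOS/Linux; the device names shown are typical examples and will vary by operating system and port.

# Linux: the board usually appears as /dev/ttyACM0 or /dev/ttyUSB0
ls /dev/ttyACM* /dev/ttyUSB* 2>/dev/null

# macOS: look for a cu.usbmodem* or cu.usbserial* entry
ls /dev/cu.usbmodem* /dev/cu.usbserial* 2>/dev/null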

Arduino IDE Setup (for custom development):

Follow the Arduino-based platform instructions above, adding the Seeed Studio board package URL in the Boards Manager.

Raspberry Pi Platform

The Raspberry Pi environment bridges embedded constraints with full computing capabilities, allowing students to experience both resource optimization and sophisticated ML frameworks. This dual perspective demonstrates how computational resources influence algorithmic choices and system architecture decisions.

Operating System Installation:

  1. Download Raspberry Pi Imager from raspberrypi.com/software
  2. Flash Raspberry Pi OS (64-bit recommended) to a microSD card (32GB minimum)
  3. Configure SSH access and WiFi credentials during the imaging process
  4. Insert the microSD card and boot the Raspberry Pi
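
If the Pi is set up headless, the first connection is made over SSH. The commands below are a minimal sketch that assumes the default hostname raspberrypi.local and a username of pi; substitute the values you configured in Raspberry Pi Imager.

# Confirm the Pi is reachable on the network (macOS/Linux syntax shown)
ping -c 3 raspberrypi.local

# Connect over SSH (replace 'pi' with the username set during imaging)
ssh pi@raspberrypi.local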

Software Environment Setup:

The following commands establish a complete Python-based ML development environment with proper dependency management:

# Update system packages
sudo apt update && sudo apt upgrade -y

# Install Python development tools
# python3-pip: Python package installer
# python3-venv: Virtual environment creation
# python3-dev: Python development headers
sudo apt install python3-pip \
                 python3-venv \
                 python3-dev -y

# Install ML framework dependencies
# libatlas-base-dev: Linear algebra library (BLAS/LAPACK)
# libhdf5-dev: HDF5 data format library
# libhdf5-serial-dev: HDF5 serial version
sudo apt install libatlas-base-dev \
                 libhdf5-dev \
                 libhdf5-serial-dev -y

# Install computer vision dependencies
# libcamera-dev: Camera interface library
# python3-libcamera: Python bindings for libcamera
# python3-kms++: Python bindings for kernel mode setting (KMS)
sudo apt install libcamera-dev \
                 python3-libcamera \
                 python3-kms++ -y

# Create virtual environment for projects
python3 -m venv ~/ml_projects
source ~/ml_projects/bin/activate

# Install core ML packages
# tensorflow: Main ML framework
# tflite-runtime: Lightweight TensorFlow Lite interpreter for edge devices
# opencv-python: Computer vision library
# numpy: Numerical computing foundation
pip install tensorflow \
            tflite-runtime \
            opencv-python \
            numpy
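
The virtual environment is active only for the current shell session. In any new terminal session, reactivate it before installing packages or running lab code:

# Reactivate the project environment in a new terminal session
source ~/ml_projects/bin/activate

# Leave the environment when finished
deactivate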

Development Tool Configuration

Proper tool configuration ensures reliable communication between your development workstation and embedded hardware. These settings establish the foundation for code deployment, debugging, and performance monitoring throughout the laboratory exercises.

Serial Communication Setup

Serial communication provides the primary interface for debugging and data monitoring in embedded systems, offering insights into system behavior that are essential for understanding performance constraints and optimization opportunities.

Windows:

  • Install appropriate USB-to-serial drivers (CH340, FTDI, or platform-specific)
  • Configure Device Manager to recognize development board

macOS/Linux:

  • Most USB-to-serial adapters work without additional drivers
  • Verify device detection: ls /dev/tty* (see the example after this list for identifying the new port)
  • Add your user to the dialout group: sudo usermod -a -G dialout $USER (Linux; log out and back in for the change to take effect)
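
To identify which entry corresponds to your board on Linux, one simple approach is to watch the kernel log as the device is plugged in; the exact name (for example ttyACM0 or ttyUSB0) depends on the adapter.

# Show the most recent serial-related kernel messages after plugging in the board
sudo dmesg | grep -i tty | tail -n 5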

IDE Configuration

Development environment settings directly impact the efficiency of the code-test-deploy cycle that characterizes embedded development. Proper configuration reduces debugging time and provides clear feedback about system performance.

Arduino IDE Settings:

  • Configure appropriate COM port under Tools → Port
  • Set correct board and processor selection
  • Verify upload speed (typically 115200 baud)
  • Enable verbose output during compilation for debugging

Raspberry Pi Development:

  • Configure SSH keys for remote development
  • Install VS Code with Python and Remote SSH extensions
  • Set up Jupyter notebook access for interactive development
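
A minimal sketch of the SSH key setup, assuming the default hostname raspberrypi.local and username pi (substitute the values you configured during imaging):

# Generate a key pair on the development workstation (accept the default path)
ssh-keygen -t ed25519 -C "ml-lab-workstation"

# Copy the public key to the Raspberry Pi so future logins skip password entry
ssh-copy-id pi@raspberrypi.local

# Verify that key-based login works
ssh pi@raspberrypi.local "hostname && uname -m"

With key-based login in place, the VS Code Remote SSH extension can open folders on the Pi directly from your workstation.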

Environment Verification

Verification procedures confirm that your development environment can successfully communicate with hardware and execute basic operations. These tests establish baseline functionality before proceeding to more complex laboratory exercises.

Hardware Detection Tests

The following verification procedures test core functionality required for laboratory exercises, ensuring that both hardware communication and software libraries operate correctly.

Arduino Platforms:

void setup() {
  // Initialize the serial port used for debugging output
  Serial.begin(115200);
  Serial.println("Development environment test");
  Serial.print("Board: ");
  // ARDUINO_BOARD is a compile-time macro identifying the selected board
  Serial.println(ARDUINO_BOARD);
}

void loop() {
  // Print a heartbeat message once per second
  Serial.println("Environment operational");
  delay(1000);
}
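
After uploading the sketch, view the output in the IDE's Serial Monitor (Tools → Serial Monitor) set to 115200 baud, or with a terminal program. For example, with screen on macOS/Linux (the device path is an example; use the port shown in the IDE):

# Open a serial session at 115200 baud; press Ctrl-A then K to exit
screen /dev/ttyACM0 115200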

Raspberry Pi:

# Test camera interface (on newer Raspberry Pi OS releases the command is rpicam-hello)
libcamera-hello --timeout 5000

# Test Python ML environment
python3 -c \
  "import tensorflow as tf; print('TensorFlow:', tf.__version__)"
python3 -c \
  "import cv2; print('OpenCV:', cv2.__version__)"

Grove Vision AI V2:

  • Verify device detection in SenseCraft AI web interface
  • Test basic model deployment through visual programming interface

Common Setup Issues and Solutions

Setup challenges are normal and provide valuable learning opportunities about embedded system constraints and debugging techniques. The following solutions address the most frequently encountered issues during environment configuration.

Device Connection Problems:

  • Verify USB cable supports data transfer (not power-only)
  • Install platform-specific USB drivers if device not recognized
  • Try different USB ports or USB hubs if connection unstable

Compilation Errors:

  • Confirm correct board and processor selection in IDE
  • Verify all required libraries installed with compatible versions
  • Check for sufficient disk space for compilation process

Runtime Issues:

  • Ensure adequate power supply (especially for camera operations)
  • Verify SD card compatibility and formatting (Raspberry Pi)
  • Check memory allocation for ML models within platform constraints

Network Connectivity (WiFi-enabled platforms):

  • Confirm network credentials and security protocols
  • Check firewall settings for development tool access
  • Verify network allows device-to-development machine communication

Troubleshooting and Support

Common Hardware Issues:

  • Device not recognized: Ensure USB cable supports data transfer, try different ports
  • Upload failures: Check board selection and port configuration in IDE
  • Power issues: Verify adequate power supply, especially for camera operations
  • Memory errors: Confirm model size fits within platform constraints

Software Setup Issues:

  • Library conflicts: Use compatible versions specified in setup guides
  • Compilation errors: Verify all dependencies installed correctly
  • Network connectivity: Check firewall settings and network permissions

Platform-Specific Resources:

  • XIAOML Kit: Seeed Studio Documentation
  • Arduino Nicla Vision: Arduino Documentation
  • Grove Vision AI V2: SenseCraft AI Platform
  • Raspberry Pi: Official Documentation

Community Support:

  • GitHub Issues: Report bugs and request features through the project repository
  • Discussion Forums: Platform-specific communities on Arduino, Raspberry Pi, and Seeed Studio websites
  • Stack Overflow: Tag questions with appropriate platform tags for community assistance

Ready for Laboratory Exercises

With your development environment configured and verified, you have established the foundational tools needed for embedded ML programming. The skills developed during environment setup—understanding toolchains, managing dependencies, and verifying system functionality—apply throughout all subsequent laboratory work.

Your configured environment now supports the complete development workflow from algorithm implementation through hardware deployment and performance optimization. The Laboratory Overview provides exercise categories organized by complexity and learning objectives, designed to build systematically on these foundational capabilities.

Recommended starting sequence:

  1. Begin with basic sensor exercises to verify hardware functionality
  2. Progress to single-modality ML applications (image or audio)
  3. Advance to multi-modal and optimization exercises

Each laboratory exercise includes detailed implementation procedures, expected performance benchmarks, and troubleshooting guidance specific to the project requirements. The development environment you have established provides the foundation for exploring the complete spectrum of embedded ML applications and optimization techniques.
