14  Security & Privacy


DALL·E 3 Prompt: An illustration on privacy and security in machine learning systems. The image shows a digital landscape with a network of interconnected nodes and data streams, symbolizing machine learning algorithms. In the foreground, there’s a large lock superimposed over the network, representing privacy and security. The lock is semi-transparent, allowing the underlying network to be partially visible. The background features binary code and digital encryption symbols, emphasizing the theme of cybersecurity. The color scheme is a mix of blues, greens, and grays, suggesting a high-tech, digital environment.

Security and privacy are critical when developing real-world machine learning systems. As machine learning is increasingly applied to sensitive domains like healthcare, finance, and personal data, protecting confidentiality and preventing misuse of data and models becomes imperative. Anyone aiming to build robust and responsible ML systems must grasp potential security and privacy risks such as data leaks, model theft, adversarial attacks, bias, and unintended access to private information. We also need to understand best practices for mitigating these risks. Most importantly, security and privacy cannot be an afterthought and must be proactively addressed throughout the ML system development lifecycle - from data collection and labeling to model training, evaluation, and deployment. Embedding security and privacy considerations into each stage of building, deploying, and managing machine learning systems is essential for safely unlocking the benefits of AI.

Learning Objectives
  • Understand key ML privacy and security risks, such as data leaks, model theft, adversarial attacks, bias, and unintended data access.

  • Learn from historical hardware and embedded systems security incidents.

  • Identify threats to ML models like data poisoning, model extraction, membership inference, and adversarial examples.

  • Recognize hardware security threats to embedded ML spanning hardware bugs, physical attacks, side channels, counterfeit components, etc.

  • Explore embedded ML defenses, such as trusted execution environments, secure boot, physical unclonable functions, and hardware security modules.

  • Discuss privacy issues handling sensitive user data with embedded ML, including regulations.

  • Learn privacy-preserving ML techniques like differential privacy, federated learning, homomorphic encryption, and synthetic data generation.

  • Understand trade-offs between privacy, accuracy, efficiency, threat models, and trust assumptions.

  • Recognize the need for a cross-layer perspective spanning electrical, firmware, software, and physical design when securing embedded ML devices.

14.1 Introduction

Machine learning has evolved substantially from its academic origins, where privacy was not a primary concern. As ML migrated into commercial and consumer applications, the data became more sensitive - encompassing personal information like communications, purchases, and health data. This explosion of data availability fueled rapid advancements in ML capabilities. However, it also exposed new privacy risks, as demonstrated by incidents like the AOL data leak in 2006 and the Cambridge Analytica scandal.

These events highlighted the growing need to address privacy in ML systems. In this chapter, we explore privacy and security considerations together, as they are inherently linked in ML:

  • Privacy refers to controlling access to sensitive user data, such as financial information or biometric data collected by an ML application.

  • Security protects ML systems and data from hacking, theft, and misuse.

For example, an ML-powered home security camera must secure video feeds against unauthorized access and provide privacy protections to ensure only intended users can view the footage. A breach of either security or privacy could expose private user moments.

Embedded ML systems like smart assistants and wearables are ubiquitous and process intimate user data. However, their computational constraints often prevent heavy security protocols. Designers must balance performance needs with rigorous security and privacy standards tailored to embedded hardware limitations.

This chapter provides essential knowledge for addressing the complex privacy and security landscape of embedded ML. We will explore vulnerabilities and cover various techniques that enhance privacy and security within embedded systems’ resource constraints.

We hope that by building a holistic understanding of risks and safeguards, you will gain the principles to develop secure, ethical, embedded ML applications.

14.2 Terminology

In this chapter, we will discuss security and privacy together, so there are key terms that we need to be clear about.

  • Privacy: Consider an ML-powered home security camera that identifies and records potential threats. This camera records identifiable information of individuals approaching and potentially entering this home, including faces. Privacy concerns may surround who can access this data.

  • Security: Consider an ML-powered home security camera that identifies and records potential threats. The security aspect would ensure that hackers cannot access these video feeds and recognition models.

  • Threat: Using our home security camera example, a threat could be a hacker trying to access live feeds or stored videos or using false inputs to trick the system.

  • Vulnerability: A common vulnerability might be a poorly secured network through which the camera connects to the internet, which could be exploited to access the data.

14.3 Historical Precedents

While the specifics of machine learning hardware security can be distinct, the embedded systems field has a history of security incidents that provide critical lessons for all connected systems, including those using ML. Here are detailed explorations of past breaches:

14.3.1 Stuxnet

In 2010, something unexpected was found on a computer in Iran - a very complicated computer virus that experts had never seen before. Stuxnet was a malicious computer worm that targeted supervisory control and data acquisition (SCADA) systems and was designed to damage Iran’s nuclear program (Farwell and Rohozinski 2011). Stuxnet used four “zero-day exploits” - attacks that take advantage of secret weaknesses in software that no one knows about yet. This made Stuxnet very sneaky and hard to detect.

Farwell, James P., and Rafal Rohozinski. 2011. “Stuxnet and the Future of Cyber War.” Survival 53 (1): 23–40. https://doi.org/10.1080/00396338.2011.555586.

But Stuxnet wasn’t designed to steal information or spy on people. Its goal was physical destruction - to sabotage centrifuges at Iran’s Natanz nuclear plant! So how did the virus get onto computers at the Natanz plant, which was supposed to be disconnected from the outside world for security? Experts think someone inserted a USB stick containing Stuxnet into the internal Natanz network. This allowed the virus to “jump” from an outside system onto the isolated nuclear control systems and wreak havoc.

Stuxnet was incredibly advanced malware built by national governments to cross from the digital realm into real-world infrastructure. In a way never seen before, it specifically targeted important industrial machines, the kind of equipment where embedded machine learning is now highly applicable. The virus provided a wake-up call about how sophisticated cyberattacks could now physically destroy equipment and facilities.

This breach was significant due to its sophistication; Stuxnet specifically targeted programmable logic controllers (PLCs) used to automate electromechanical processes such as the speed of centrifuges for uranium enrichment. The worm exploited vulnerabilities in the Windows operating system to gain access to the Siemens Step7 software controlling the PLCs. Despite not being a direct attack on ML systems, Stuxnet is relevant for all embedded systems as it showcases the potential for state-level actors to design attacks that bridge the cyber and physical worlds with devastating effects. Figure 14.1 explains Stuxnet in greater detail.

Figure 14.1: Stuxnet explained. Source: IEEE Spectrum

14.3.2 Jeep Cherokee Hack

The Jeep Cherokee hack was a groundbreaking event demonstrating the risks inherent in increasingly connected automobiles (Miller 2019). In a controlled demonstration, security researchers remotely exploited a vulnerability in the Uconnect entertainment system, which had a cellular connection to the internet. They were able to control the vehicle’s engine, transmission, and brakes, alarming the automotive industry into recognizing the severe safety implications of cyber vulnerabilities in vehicles. Video 14.1 below is a short documentary of the attack.

Miller, Charlie. 2019. “Lessons Learned from Hacking a Car.” IEEE Design & Test 36 (6): 7–9. https://doi.org/10.1109/mdat.2018.2863106.
Video 14.1: Jeep Cherokee Hack

While this wasn’t an attack on an ML system per se, the reliance of modern vehicles on embedded systems for safety-critical functions has significant parallels to the deployment of ML in embedded systems, underscoring the need for robust security at the hardware level.

14.3.3 Mirai Botnet

The Mirai botnet involved the infection of networked devices such as digital cameras and DVR players (Antonakakis et al. 2017). In October 2016, the botnet was used to conduct one of the largest DDoS attacks, disrupting internet access across the United States. The attack was possible because many devices used default usernames and passwords, which were easily exploited by the Mirai malware to control the devices. Video 14.2 explains how the Mirai Botnet works.

Antonakakis, Manos, Tim April, Michael Bailey, Matt Bernhard, Elie Bursztein, Jaime Cochran, Zakir Durumeric, et al. 2017. “Understanding the Mirai Botnet.” In 26th USENIX Security Symposium (USENIX Security 17), 1093–1110.
Video 14.2: Mirai Botnet

Although the devices were not ML-based, the incident is a stark reminder of what can happen when numerous embedded devices with poor security controls are networked, which is becoming more common with the growth of ML-based IoT devices.

14.3.4 Implications

These historical breaches demonstrate the cascading effects of hardware vulnerabilities in embedded systems. Each incident offers a precedent for understanding the risks and designing better security protocols. For instance, the Mirai botnet highlights the immense destructive potential when threat actors can gain control over networked devices with weak security, a situation becoming increasingly common with ML systems. Many current ML devices function as “edge” devices meant to collect and process data locally before sending it to the cloud. Much like the cameras and DVRs compromised by Mirai, edge ML devices often rely on embedded hardware like ARM processors and run lightweight operating systems like Linux. Securing the device credentials is critical.

Similarly, the Jeep Cherokee hack was a watershed moment for the automotive industry. It exposed serious vulnerabilities in the growing network-connected vehicle systems and their lack of isolation from core drive systems like brakes and steering. In response, auto manufacturers invested heavily in new cybersecurity measures, though gaps likely remain.

Chrysler issued a recall to patch the vulnerable Uconnect software that allowed the remote exploit. This included adding network-level protections to prevent unauthorized external access and compartmentalizing in-vehicle systems to limit lateral movement. Additional layers of encryption were added for commands sent over the CAN bus within vehicles.

The incident also spurred the creation of new cybersecurity standards and best practices. The Auto-ISAC was established for automakers to share intelligence, and the NHTSA provided guidance on managing cybersecurity risks. New testing and audit procedures were developed to assess vulnerabilities proactively. The aftereffects continue to drive change in the automotive industry as cars become increasingly software-defined.

Unfortunately, manufacturers often overlook security when developing new ML edge devices - using default passwords, unencrypted communications, unsecured firmware updates, etc. Any such vulnerabilities could allow attackers to gain access and control devices at scale by infecting them with malware. With a botnet of compromised ML devices, attackers could leverage their aggregated computational power for DDoS attacks on critical infrastructure.

While these events didn’t directly involve machine learning hardware, the principles of the attacks carry over to ML systems, which often involve similar embedded devices and network architectures. As ML hardware is increasingly integrated with the physical world, securing it against such breaches is paramount. The evolution of security measures in response to these incidents provides valuable insights into protecting current and future ML systems from analogous vulnerabilities.

The distributed nature of ML edge devices means threats can propagate quickly across networks. And if devices are being used for mission-critical purposes like medical devices, industrial controls, or self-driving vehicles, the potential physical damage from weaponized ML bots could be severe. Just like Mirai demonstrated the dangerous potential of poorly secured IoT devices, the litmus test for ML hardware security will be how vulnerable or resilient these devices are to worm-like attacks. The stakes are raised as ML spreads to safety-critical domains, putting the onus on manufacturers and system operators to incorporate the lessons from Mirai.

The lesson is the importance of designing for security from the outset and having layered defenses. The Jeep case highlights potential vulnerabilities for ML systems around externally facing software interfaces and isolation between subsystems. Manufacturers of ML devices and platforms should assume a similar proactive and comprehensive approach to security rather than leaving it as an afterthought. Rapid response and dissemination of best practices will be crucial as threats evolve.

14.4 Security Threats to ML Models

ML models face security risks that can undermine their integrity, performance, and trustworthiness if not adequately addressed. While there are several different threats, the primary ones are model theft, in which adversaries steal the proprietary model parameters and the sensitive data they contain; data poisoning, which compromises models through data tampering; and adversarial attacks, which deceive the model into making incorrect or unwanted predictions.

14.4.1 Model Theft

Model theft occurs when an attacker gains unauthorized access to a deployed ML model. The concern here is the theft of the model’s structure and trained parameters and the proprietary data it contains (Ateniese et al. 2015). Model theft is a real and growing threat, as demonstrated by cases like ex-Google engineer Anthony Levandowski, who allegedly stole Waymo’s self-driving car designs and started a competing company. Beyond economic impacts, model theft can seriously undermine privacy and enable further attacks.

Ateniese, Giuseppe, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. “Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers.” Int. J. Secur. Netw. 10 (3): 137. https://doi.org/10.1504/ijsn.2015.071829.

For instance, consider an ML model developed for personalized recommendations in an e-commerce application. If a competitor steals this model, they gain insights into business analytics, customer preferences, and even trade secrets embedded within the model’s data. Attackers could leverage stolen models to craft more effective inputs for model inversion attacks, deducing private details about the model’s training data. A cloned e-commerce recommendation model could reveal customer purchase behaviors and demographics.

To understand model inversion attacks, consider a facial recognition system used to grant access to secured facilities. The system is trained on a dataset of employee photos. An attacker could infer features of the original dataset by observing the model’s output to various inputs. For example, suppose the model’s confidence level for a particular face is significantly higher for a given set of features. In that case, an attacker might deduce that someone with those features is likely in the training dataset.

The methodology of model inversion typically involves the following steps, illustrated in the sketch after this list:

  • Accessing Model Outputs: The attacker queries the ML model with input data and observes the outputs. This is often done through a legitimate interface, like a public API.

  • Analyzing Confidence Scores: For each input, the model provides a confidence score that reflects how similar the input is to the training data.

  • Reverse-Engineering: By analyzing the confidence scores or output probabilities, attackers can use optimization techniques to reconstruct what they believe is close to the original input data.
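To make these steps concrete, the following minimal sketch simulates a confidence-guided inversion loop against a black-box interface. The query_model function is a hypothetical stand-in for a deployed API: here its confidence is simply a similarity score against a hidden vector, an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "private" training example the attacker wants to reconstruct.
secret = rng.random(16)

def query_model(x):
    """Stand-in for a deployed model API that returns a confidence score.
    Confidence is higher when x resembles the hidden training vector."""
    return float(np.exp(-np.linalg.norm(x - secret) ** 2))

# Steps 1-3: query the model, observe confidence, and keep perturbations
# that push the confidence higher (a crude form of reverse-engineering).
guess = rng.random(16)
best_score = query_model(guess)
for _ in range(20_000):
    candidate = np.clip(guess + rng.normal(scale=0.05, size=16), 0, 1)
    score = query_model(candidate)
    if score > best_score:
        guess, best_score = candidate, score

print("reconstruction error:", np.linalg.norm(guess - secret))
```

In practice, query budgets, rate limits, and coarse or rounded confidence scores all constrain how much an attacker can recover this way.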

One historical example of such a vulnerability being explored was the research on de-anonymization attacks against the Netflix Prize dataset, where researchers demonstrated that it was possible to learn about an individual’s movie preferences, which could lead to privacy breaches (Narayanan and Shmatikov 2006).

Narayanan, Arvind, and Vitaly Shmatikov. 2006. “How to Break Anonymity of the Netflix Prize Dataset.” arXiv Preprint Cs/0610105.

Model theft can lead to economic losses, undermine competitive advantage, and violate user privacy. There’s also the risk of model inversion attacks, where an adversary could input various data into the stolen model to infer sensitive information about the training data.

Based on the desired asset, model theft attacks can be divided into two categories: exact model properties and approximate model behavior.

Stealing Exact Model Properties

In these attacks, the objective is to extract information about concrete metrics, such as a network’s learned parameters, fine-tuned hyperparameters, and the model’s internal layer architecture (Oliynyk, Mayer, and Rauber 2023).

  • Learned Parameters: Adversaries aim to steal a model’s learned knowledge (weights and biases) to replicate it. Parameter theft is generally combined with other attacks, such as architecture theft, which on its own lacks parameter knowledge.

  • Fine-Tuned Hyperparameters: Training is costly, and identifying the optimal configuration of hyperparameters (such as learning rate and regularization) can be time-consuming and resource-intensive. Consequently, stealing a model’s optimized hyperparameters enables adversaries to replicate the model without incurring the exact development costs.

  • Model Architecture: This attack concerns the specific design and structure of the model, such as layers, neurons, and connectivity patterns. Beyond reducing associated training costs, this theft poses a severe risk to intellectual property, potentially undermining a company’s competitive advantage. Architecture theft can be achieved by exploiting side-channel attacks (discussed later).

Stealing Approximate Model Behavior

Instead of extracting exact numerical values of the model’s parameters, these attacks aim to reproduce the model’s behavior (predictions and effectiveness), decision-making, and high-level characteristics (Oliynyk, Mayer, and Rauber 2023). These techniques aim to achieve similar outcomes while allowing for internal deviations in parameters and architecture. Types of approximate behavior theft include gaining the same level of effectiveness and obtaining prediction consistency.

Oliynyk, Daryna, Rudolf Mayer, and Andreas Rauber. 2023. “I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences.” ACM Comput. Surv. 55 (14s): 1–41. https://doi.org/10.1145/3595292.
  • Level of Effectiveness: Attackers aim to replicate the model’s decision-making capabilities rather than focus on the precise parameter values. This is done by understanding the overall behavior of the model. Consider a scenario where an attacker wants to copy the behavior of an image classification model. By analyzing the model’s decision boundaries, the attacker tunes their model to reach an effectiveness comparable to the original model. This could entail analyzing 1) the confusion matrix to understand the balance of prediction metrics (true positive, true negative, false positive, false negative) and 2) other performance metrics, such as F1 score and precision, to ensure that the two models are comparable.

  • Prediction Consistency: The attacker tries to align their model’s prediction patterns with the target model’s. This involves matching prediction outputs (both positive and negative) on the same set of inputs and ensuring distributional consistency across different classes. For instance, consider a natural language processing (NLP) model that generates sentiment analysis for movie reviews (labels reviews as positive, neutral, or negative). The attacker will try to fine-tune their model to match the predictions of the original model on the same set of movie reviews. This includes ensuring that the model makes the same mistakes (mispredictions) that the targeted model makes. (A minimal surrogate-training sketch follows this list.)
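The surrogate-training sketch below illustrates approximate behavior theft under the assumption that the attacker only has query access to the target. The target and surrogate are toy scikit-learn models chosen for illustration; the key idea is that the surrogate is trained on the target’s predictions rather than on the true labels, and the attacker then measures prediction consistency.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for a proprietary target model (its weights stay hidden).
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
target = LogisticRegression(max_iter=1000).fit(X[:2000], y[:2000])

# Attacker: query the target on self-generated inputs, keep only its predictions.
X_query = np.random.default_rng(1).normal(size=(2000, 10))
stolen_labels = target.predict(X_query)

# Train a surrogate with a different architecture on the stolen labels.
surrogate = DecisionTreeClassifier(max_depth=8, random_state=0)
surrogate.fit(X_query, stolen_labels)

# Prediction consistency: how often does the surrogate agree with the target?
X_test = X[2000:]
agreement = np.mean(surrogate.predict(X_test) == target.predict(X_test))
print(f"surrogate agrees with target on {agreement:.1%} of held-out inputs")
```

Real attacks refine this loop by choosing queries near the target’s decision boundary, which raises agreement while keeping the query count low.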

Case Study

In 2018, Tesla filed a lawsuit against self-driving car startup Zoox, alleging former employees stole confidential data and trade secrets related to Tesla’s autonomous driving assistance system.

Tesla claimed that several of its former employees took over 10 GB of proprietary data, including ML models and source code, before joining Zoox. This allegedly included one of Tesla’s crucial image recognition models for identifying objects.

The theft of this sensitive proprietary model could help Zoox shortcut years of ML development and duplicate Tesla’s capabilities. Tesla argued this theft of intellectual property caused significant financial and competitive harm. There were also concerns it could allow model inversion attacks to infer private details about Tesla’s testing data.

The Zoox employees denied stealing any proprietary information. However, the case highlights the significant risks of model theft—enabling the cloning of commercial models, causing economic impacts, and opening the door for further data privacy violations.

14.4.2 Data Poisoning

Data poisoning is an attack where the training data is tampered with, leading to a compromised model (Biggio, Nelson, and Laskov 2012). Attackers can modify existing training examples, insert new malicious data points, or influence the data collection process. The poisoned data is labeled in such a way as to skew the model’s learned behavior. This can be particularly damaging in applications where ML models make automated decisions based on learned patterns. Beyond training sets, poisoning tests and validation data can allow adversaries to boost reported model performance artificially.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf.

The process usually involves the following steps, illustrated in the sketch after this list:

  • Injection: The attacker adds incorrect or misleading examples into the training set. These examples are often designed to look normal to cursory inspection but have been carefully crafted to disrupt the learning process.

  • Training: The ML model trains on this manipulated dataset and develops skewed understandings of the data patterns.

  • Deployment: Once the model is deployed, the corrupted training leads to flawed decision-making or predictable vulnerabilities the attacker can exploit.
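A hedged, minimal simulation of this injection-training-deployment pipeline is shown below using label flipping on synthetic data; the models, poisoning rate, and numbers are illustrative only, not drawn from any cited attack.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Injection: the attacker flips the labels of 30% of the training examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

# Training: the victim unknowingly trains on the tampered dataset.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Deployment: accuracy on untouched test data degrades.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", dirty_model.score(X_test, y_test))
```

The drop here is modest because the flips are random; flips concentrated on particular classes or regions of the input space can do far more damage per tampered example.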

The impacts of data poisoning extend beyond just classification errors or accuracy drops. For instance, if incorrect or malicious data is introduced into a traffic sign recognition system’s training set, the model may learn to misclassify stop signs as yield signs, which can have dangerous real-world consequences, especially in embedded autonomous systems like autonomous vehicles.

Data poisoning can degrade a model’s accuracy, force it to make incorrect predictions, or cause it to behave unpredictably. In critical applications like healthcare, such alterations can lead to significant trust and safety issues.

There are several main categories of data poisoning (Oprea, Singhal, and Vassilev 2022), including the following:

Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787.
  • Availability Attacks: These attacks seek to compromise a model’s overall functionality. They cause it to misclassify most testing samples, rendering the model unusable for practical applications. An example is label flipping, where labels of a specific, targeted class are replaced with labels from a different one.

  • Targeted Attacks: Unlike availability attacks, targeted attacks aim to compromise a small number of the testing samples. So, the effect is localized to a limited number of classes, while the model maintains the same original level of accuracy on most of the classes. The targeted nature of the attack requires the attacker to possess knowledge of the model’s classes, making detecting these attacks more challenging.

  • Backdoor Attacks: In these attacks, an adversary targets specific patterns in the data. The attacker introduces a backdoor (a malicious, hidden trigger or pattern) into the training data, such as altering certain features in structured data or a pattern of pixels at a fixed position. This causes the model to associate the malicious pattern with specific labels. As a result, when the model encounters test samples that contain the malicious pattern, it makes false predictions, underscoring the need for vigilance from data security professionals. (A minimal trigger-stamping sketch follows this list.)

  • Subpopulation Attacks: Attackers selectively choose to compromise a subset of the testing samples while maintaining accuracy on the rest of the samples. You can think of these attacks as a combination of availability and targeted attacks: performing availability attacks (performance degradation) within the scope of a targeted subset. Although subpopulation attacks may seem very similar to targeted attacks, the two have clear differences:

  • Scope: While targeted attacks target a selected set of samples, subpopulation attacks target a general subpopulation with similar feature representations. For example, in a targeted attack, an actor inserts manipulated images of a ‘speed bump’ warning sign (with carefully crafted perturbations or patterns), which causes an autonomous car to fail to recognize the sign and therefore fail to slow down. On the other hand, manipulating all samples of people with a British accent so that a speech recognition model would misclassify a British person’s speech is an example of a subpopulation attack.

  • Knowledge: While targeted attacks require a high degree of familiarity with the data, subpopulation attacks require less intimate knowledge to be effective.
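Returning to the backdoor category above, the sketch below shows only the data-tampering step: stamping a small pixel trigger onto a fraction of training images and relabeling them to an attacker-chosen class. The image shapes, poisoning rate, and target label are arbitrary assumptions for illustration.

```python
import numpy as np

def poison_with_backdoor(images, labels, target_label=7, rate=0.05, seed=0):
    """Stamp a 3x3 white patch into one corner of a fraction of the images
    and relabel them, so a model trained on this data learns to associate
    the patch with target_label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # the hidden trigger pattern
    labels[idx] = target_label    # attacker-chosen label
    return images, labels

# Example with random stand-in data shaped like 28x28 grayscale images.
imgs = np.random.default_rng(1).random((1000, 28, 28))
lbls = np.random.default_rng(2).integers(0, 10, size=1000)
poisoned_imgs, poisoned_lbls = poison_with_backdoor(imgs, lbls)
print("relabeled examples:", int(np.sum(poisoned_lbls != lbls)))
```

At test time, the attacker simply stamps the same patch onto any input to steer the trained model toward the target label, while clean inputs behave normally.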

Case Study 1

In 2017, researchers demonstrated a data poisoning attack against a popular toxicity classification model called Perspective (Hosseini et al. 2017). This ML model detects toxic comments online.

Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective Api Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138. https://arxiv.org/abs/1702.08138.

The researchers added synthetically generated toxic comments with slight misspellings and grammatical errors to the model’s training data. This slowly corrupted the model, causing it to misclassify increasing numbers of severely toxic inputs as non-toxic over time.

After retraining on the poisoned data, the model’s false negative rate increased from 1.4% to 27% - allowing extremely toxic comments to bypass detection. The researchers warned this stealthy data poisoning could enable the spread of hate speech, harassment, and abuse if deployed against real moderation systems.

This case highlights how data poisoning can degrade model accuracy and reliability. For social media platforms, a poisoning attack that impairs toxicity detection could lead to the proliferation of harmful content and distrust of ML moderation systems. The example demonstrates why securing training data integrity and monitoring for poisoning is critical across application domains.

Case Study 2

Interestingly enough, data poisoning attacks are not always malicious (Shan et al. 2023). Nightshade, a tool developed by a team led by Professor Ben Zhao at the University of Chicago, utilizes data poisoning to help artists protect their art against scraping and copyright violations by generative AI models. Artists can use the tool to modify their images subtly before uploading them online.

While these changes are imperceptible to the human eye, they can significantly degrade the performance of generative AI models when integrated into the training data. Generative models can be manipulated into producing unrealistic or nonsensical outputs. For example, with just 300 corrupted images, the University of Chicago researchers could deceive the latest Stable Diffusion model into generating images of dogs that resemble cats, or images of cows when prompted for cars.

As the quantity of corrupted images online grows, the efficacy of models trained on scraped data will decline exponentially. Initially, identifying corrupted data is challenging and necessitates manual intervention. Subsequently, contamination spreads rapidly to related concepts as generative models establish connections between words and their visual representations. Consequently, a corrupted image of a “car” could propagate into generated images linked to terms such as “truck,” “train,” and “bus.”

On the other hand, the same technique can be used maliciously against legitimate generative model applications. This illustrates the challenging and novel nature of machine learning attacks.

Figure 14.2 demonstrates the effects of different levels of data poisoning (50 samples, 100 samples, and 300 samples of poisoned images) on generating images in various categories. Notice how the images start deforming and deviating from the desired category. For example, after 300 poison samples, a car prompt generates a cow.

Figure 14.2: Data poisoning. Source: Shan et al. (2023).
Shan, Shawn, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y Zhao. 2023. “Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828. https://arxiv.org/abs/2310.13828.

14.4.3 Adversarial Attacks

Adversarial attacks aim to trick models into making incorrect predictions by providing them with specially crafted, deceptive inputs (called adversarial examples) (Parrish et al. 2023). By adding slight, often imperceptible perturbations to input data, adversaries can “hack” a model’s pattern recognition and deceive it into making a wrong prediction.

Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384. https://arxiv.org/abs/2305.14384.
Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html.
Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042.

One can generate prompts that lead to unsafe images in text-to-image models like DALL·E (Ramesh et al. 2021) or Stable Diffusion (Rombach et al. 2022). For example, by altering the pixel values of an image, attackers can deceive a facial recognition system into identifying a face as a different person.

Adversarial attacks exploit the way ML models learn and make decisions during inference. These models work on the principle of recognizing patterns in data. An adversary crafts malicious inputs with perturbations to mislead the model’s pattern recognition—essentially ‘hacking’ the model’s perceptions.

Adversarial attacks fall under different scenarios:

  • Whitebox Attacks: The attacker has comprehensive knowledge of the target model’s internal workings, including the training data, parameters, and architecture. This extensive access facilitates the exploitation of the model’s vulnerabilities. The attacker can leverage specific and subtle weaknesses to construct highly effective adversarial examples. (A minimal whitebox example is sketched after this list.)

  • Blackbox Attacks: In contrast to whitebox attacks, in blackbox attacks, the attacker has little to no knowledge of the target model. The adversarial actor must carefully observe the model’s output behavior to carry out the attack.

  • Greybox Attacks: These attacks occupy a spectrum between black-box and white-box attacks. The adversary possesses partial knowledge of the target model’s internal structure. For instance, the attacker might know the training data but lack information about the model’s architecture or parameters. In practical scenarios, most attacks fall within this grey area.
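As a concrete whitebox example (referenced in the list above), the sketch below applies the fast gradient sign method (FGSM), one classic technique among many, to a tiny logistic-regression classifier written in NumPy. The model and data are synthetic stand-ins; the point is that knowing the weights lets the attacker compute the input gradient directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny logistic-regression "victim" whose weights the whitebox attacker knows.
w, b = rng.normal(size=20), 0.1

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Pick a benign input the model confidently assigns to class 1.
x = rng.normal(size=20)
while predict_proba(x) < 0.9:
    x = rng.normal(size=20)

# FGSM: for the logistic loss with true label y = 1, the gradient of the loss
# with respect to the input is (p - 1) * w; step in the sign of that gradient.
epsilon = 0.3
grad_wrt_input = (predict_proba(x) - 1.0) * w
x_adv = x + epsilon * np.sign(grad_wrt_input)

print("original confidence:   ", round(predict_proba(x), 3))
print("adversarial confidence:", round(predict_proba(x_adv), 3))
```

In a blackbox setting, the same idea is approximated by estimating gradients from repeated queries or by transferring examples crafted on a substitute model.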

The landscape of machine learning models is complex and broad, especially given their relatively recent integration into commercial applications. This rapid adoption, while transformative, has brought to light numerous vulnerabilities within these models. Consequently, various adversarial attack methods have emerged, each strategically exploiting different aspects of different models. Below, we highlight a subset of these methods, showcasing the multifaceted nature of adversarial attacks on machine learning models:

  • Generative Adversarial Networks (GANs) are deep learning models consisting of two networks competing against each other: a generator and a discriminator (Goodfellow et al. 2020). The generator tries to synthesize realistic data while the discriminator evaluates whether they are real or fake. GANs can be used to craft adversarial examples. The generator network is trained to produce inputs that the target model misclassifies. These GAN-generated images can then attack a target classifier or detection model. The generator and the target model are engaged in a competitive process, with the generator continually improving its ability to create deceptive examples and the target model enhancing its resistance to such examples. GANs provide a robust framework for crafting complex and diverse adversarial inputs, illustrating the adaptability of generative models in the adversarial landscape.

  • Transfer Learning Adversarial Attacks exploit the knowledge transferred from a pre-trained model to a target model, creating adversarial examples that can deceive both models. These attacks pose a growing concern, particularly when adversaries have knowledge of the feature extractor but lack access to the classification head (the part or layer responsible for making the final classifications). Referred to as “headless attacks,” these transferable adversarial strategies leverage the expressive capabilities of feature extractors to craft perturbations while oblivious to the label space or training data. The existence of such attacks underscores the importance of developing robust defenses for transfer learning applications, especially since pre-trained models are commonly used (Abdelkader et al. 2020).

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622.
Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181.

Case Study

In 2017, researchers conducted experiments by placing small black and white stickers on stop signs (Eykholt et al. 2017). When viewed by a normal human eye, the stickers did not obscure the sign or prevent interpretability. However, when images of the stickered stop signs were fed into standard traffic sign classification ML models, they were misclassified as speed limit signs over 85% of the time.

Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945. https://arxiv.org/abs/1707.08945.

This demonstration showed how simple adversarial stickers could trick ML systems into misreading critical road signs. If deployed realistically, these attacks could endanger public safety, causing autonomous vehicles to misinterpret stop signs as speed limits. Researchers warned this could potentially cause dangerous rolling stops or acceleration into intersections.

This case study provides a concrete illustration of how adversarial examples exploit the pattern recognition mechanisms of ML models. By subtly altering the input data, attackers can induce incorrect predictions and pose significant risks to safety-critical applications like self-driving cars. The attack’s simplicity demonstrates how even minor, imperceptible changes can lead models astray. Consequently, developers must implement robust defenses against such threats.

14.5 Security Threats to ML Hardware

A systematic examination of security threats to embedded machine learning hardware is essential for a comprehensive understanding of potential vulnerabilities in ML systems. We begin with hardware vulnerabilities arising from intrinsic design flaws that attackers can exploit; this foundational knowledge is crucial for recognizing the origins of hardware weaknesses. We then examine physical attacks, which represent the most direct and overt methods of compromising hardware integrity. Building on this, we analyze fault injection attacks, which demonstrate how deliberate manipulations can induce system failures.

Side-channel attacks come next and introduce greater complexity, as they exploit indirect information leakage and require a nuanced understanding of hardware operations and environmental interactions. Leaky interfaces show how external communication channels can expose data unintentionally. Counterfeit hardware builds on the earlier discussions of hardware integrity and exploitation, often compounding those issues with additional risks due to questionable provenance. Finally, supply chain risks encompass all of the concerns above and frame them within the hardware’s journey from production to deployment, highlighting the multifaceted nature of hardware security and the need for vigilance at every stage.

Table 14.1 provides an overview summarizing these threats:

Table 14.1: Threat types on hardware security.

| Threat Type | Description | Relevance to ML Hardware Security |
|---|---|---|
| Hardware Bugs | Intrinsic flaws in hardware designs that can compromise system integrity. | Foundation of hardware vulnerability. |
| Physical Attacks | Direct exploitation of hardware through physical access or manipulation. | Basic and overt threat model. |
| Fault-injection Attacks | Induction of faults to cause errors in hardware operation, leading to potential system crashes. | Systematic manipulation leading to failure. |
| Side-Channel Attacks | Exploitation of leaked information from hardware operation to extract sensitive data. | Indirect attack via environmental observation. |
| Leaky Interfaces | Vulnerabilities arising from interfaces that expose data unintentionally. | Data exposure through communication channels. |
| Counterfeit Hardware | Use of unauthorized hardware components that may have security flaws. | Compounded vulnerability issues. |
| Supply Chain Risks | Risks introduced through the hardware lifecycle, from production to deployment. | Cumulative & multifaceted security challenges. |

14.5.1 Hardware Bugs

Hardware is not immune to the pervasive issue of design flaws or bugs. Attackers can exploit these vulnerabilities to access, manipulate, or extract sensitive data, breaching the confidentiality and integrity that users and services depend on. An example of such vulnerabilities came to light with the discovery of Meltdown and Spectre—two hardware vulnerabilities rooted in critical design flaws in modern processors. These bugs allow attackers to bypass the hardware barrier that separates applications, allowing a malicious program to read the memory of other programs and the operating system.

Meltdown (Kocher et al. 2019a) and Spectre (Kocher et al. 2019b) work by taking advantage of optimizations in modern CPUs that allow them to speculatively execute instructions out of order before validity checks have been completed. This reveals data that should be inaccessible, which the attack captures through side channels like caches. The technical complexity demonstrates the difficulty of eliminating vulnerabilities even with extensive validation.

———, et al. 2019a. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002.
Kocher, Paul, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, et al. 2019b. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002.

If an ML system is processing sensitive data, such as personal user information or proprietary business analytics, Meltdown and Spectre represent a real and present danger to data security. Consider the case of an ML accelerator card designed to speed up machine learning processes, such as the ones we discussed in the AI Hardware chapter. These accelerators work with the CPU to handle complex calculations, often related to data analytics, image recognition, and natural language processing. If such an accelerator card has a vulnerability akin to Meltdown or Spectre, it could leak the data it processes. An attacker could exploit this flaw not just to siphon off data but also to gain insights into the ML model’s workings, including potentially reverse-engineering the model itself (thus returning to the issue of model theft).

A real-world scenario where this could be devastating would be in the healthcare industry. ML systems routinely process highly sensitive patient data to help diagnose, plan treatment, and forecast outcomes. A bug in the system’s hardware could lead to the unauthorized disclosure of personal health information, violating patient privacy and contravening strict regulatory standards like the Health Insurance Portability and Accountability Act (HIPAA).

The Meltdown and Spectre vulnerabilities are stark reminders that hardware security is not just about preventing unauthorized physical access but also about ensuring that the hardware’s architecture does not become a conduit for data exposure. Similar hardware design flaws regularly emerge in CPUs, accelerators, memory, buses, and other components. This necessitates ongoing retroactive mitigations and performance trade-offs in deployed systems. Proactive solutions like confidential computing architectures could mitigate entire classes of vulnerabilities through fundamentally more secure hardware design. Thwarting hardware bugs requires rigor at every stage of design, validation, and deployment.

14.5.2 Physical Attacks

Physical tampering refers to the direct, unauthorized manipulation of physical computing resources to undermine the integrity of machine learning systems. It’s a particularly insidious attack because it circumvents traditional cybersecurity measures, which often focus more on software vulnerabilities than hardware threats.

Physical tampering can take many forms, from the relatively simple, such as someone inserting a USB device loaded with malicious software into a server, to the highly sophisticated, such as embedding a hardware Trojan during the manufacturing process of a microchip (discussed later in greater detail in the Supply Chain section). ML systems are susceptible to this attack because they rely on the accuracy and integrity of their hardware to process and analyze vast amounts of data correctly.

Consider an ML-powered drone used for geographical mapping. The drone’s operation relies on a series of onboard systems, including a navigation module that processes inputs from various sensors to determine its path. If an attacker gains physical access to this drone, they could replace the genuine navigation module with a compromised one that includes a backdoor. This manipulated module could then alter the drone’s flight path to conduct surveillance over restricted areas or even smuggle contraband by flying undetected routes.

Another example is the physical tampering of biometric scanners used for access control in secure facilities. By introducing a modified sensor that transmits biometric data to an unauthorized receiver, an attacker can gain access to the personal identification data used to authenticate individuals.

There are several ways that physical tampering can occur in ML hardware:

  • Manipulating sensors: Consider an autonomous vehicle equipped with cameras and LiDAR for environmental perception. A malicious actor could deliberately manipulate the physical alignment of these sensors to create occlusion zones or distort distance measurements. This could compromise object detection capabilities and potentially endanger vehicle occupants.

  • Hardware trojans: Malicious circuit modifications can introduce trojans designed to activate upon specific input conditions. For instance, an ML accelerator chip might operate as intended until encountering a predetermined trigger, at which point it behaves erratically.

  • Tampering with memory: Physically exposing and manipulating memory chips could allow the extraction of encrypted ML model parameters. Fault injection techniques can also corrupt model data to degrade accuracy.

  • Introducing backdoors: Gaining physical access to servers, an adversary could use hardware keyloggers to capture passwords and create backdoor accounts for persistent access. These could then be used to exfiltrate ML training data over time.

  • Supply chain attacks: Manipulating third-party hardware components or compromising manufacturing and shipping channels creates systemic vulnerabilities that are difficult to detect and remediate.

14.5.3 Fault-injection Attacks

By intentionally introducing faults into ML hardware, attackers can induce errors in the computational process, leading to incorrect outputs. This manipulation compromises the integrity of ML operations and can serve as a vector for further exploitation, such as system reverse engineering or security protocol bypass. Fault injection involves deliberately disrupting standard computational operations in a system through external interference (Joye and Tunstall 2012). By precisely triggering computational errors, adversaries can alter program execution in ways that degrade reliability or leak sensitive information.

Joye, Marc, and Michael Tunstall. 2012. Fault Analysis in Cryptography. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29656-7.
Barenghi, Alessandro, Guido M. Bertoni, Luca Breveglieri, Mauro Pellicioli, and Gerardo Pelosi. 2010. “Low Voltage Fault Attacks to AES.” In 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), 7–12. IEEE; IEEE. https://doi.org/10.1109/hst.2010.5513121.
Hutter, Michael, Jorn-Marc Schmidt, and Thomas Plos. 2009. “Contact-Based Fault Injections and Power Analysis on RFID Tags.” In 2009 European Conference on Circuit Theory and Design, 409–12. IEEE; IEEE. https://doi.org/10.1109/ecctd.2009.5275012.
Amiel, Frederic, Christophe Clavier, and Michael Tunstall. 2006. “Fault Analysis of DPA-Resistant Algorithms.” In International Workshop on Fault Diagnosis and Tolerance in Cryptography, 223–36. Springer.
Agrawal, Dakshi, Selcuk Baktir, Deniz Karakoyunlu, Pankaj Rohatgi, and Berk Sunar. 2007. “Trojan Detection Using IC Fingerprinting.” In 2007 IEEE Symposium on Security and Privacy (SP ’07), 29–45. Springer; IEEE. https://doi.org/10.1109/sp.2007.36.
Skorobogatov, Sergei. 2009. “Local Heating Attacks on Flash Memory Devices.” In 2009 IEEE International Workshop on Hardware-Oriented Security and Trust, 1–6. IEEE; IEEE. https://doi.org/10.1109/hst.2009.5225028.
Skorobogatov, Sergei P, and Ross J Anderson. 2003. “Optical Fault Induction Attacks.” In Cryptographic Hardware and Embedded Systems - CHES 2002: 4th International Workshop, Redwood Shores, CA, USA, August 13-15, 2002, Revised Papers 4, 2–12. Springer.

Various physical tampering techniques can be used for fault injection. Low voltage (Barenghi et al. 2010), power spikes (Hutter, Schmidt, and Plos 2009), clock glitches (Amiel, Clavier, and Tunstall 2006), electromagnetic pulses (Agrawal et al. 2007), temperature increases (S. Skorobogatov 2009) and laser strikes (S. P. Skorobogatov and Anderson 2003) are common hardware attack vectors. They are precisely timed to induce faults like flipped bits or skipped instructions during critical operations.

For ML systems, consequences include impaired model accuracy, denial of service, extraction of private training data or model parameters, and reverse engineering of model architectures. Attackers could use fault injection to force misclassifications, disrupt autonomous systems, or steal intellectual property.

For example, Breier et al. (2018) successfully injected a fault attack into a deep neural network deployed on a microcontroller. They used a laser to heat specific transistors, forcing them to switch states. In one instance, they used this method to attack a ReLU activation function, resulting in the function always outputting a value of 0, regardless of the input. In the assembly code shown in Figure 14.3, the attack caused the executing program always to skip the jmp end instruction on line 6. This means that HiddenLayerOutput[i] is always set to 0, overwriting any values written to it on lines 4 and 5. As a result, the targeted neurons are rendered inactive, resulting in misclassifications.

Figure 14.3: Fault-injection demonstrated with assembly code. Source: Breier et al. (2018).
Breier, Jakub, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. 2018. “Deeplaser: Practical Fault Attack on Deep Neural Networks.” ArXiv Preprint abs/1806.05859. https://arxiv.org/abs/1806.05859.

An attacker’s strategy could be to infer information about the activation functions using side-channel attacks (discussed next). Then, the attacker could attempt to target multiple activation function computations by randomly injecting faults into the layers as close to the output layer as possible, increasing the likelihood and impact of the attack.
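To see why forcing a ReLU to output zero is so damaging, the following sketch simulates the effect in software: it zeroes the hidden-layer activations of a small random NumPy network and compares predictions before and after the "fault." It is a simulation of the outcome, not of the laser injection mechanism itself, and the network is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random two-layer network standing in for a deployed classifier.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

def predict(x, faulty=False):
    hidden = np.maximum(x @ W1 + b1, 0.0)   # ReLU activations
    if faulty:
        hidden = np.zeros_like(hidden)      # fault: ReLU outputs forced to 0
    return np.argmax(hidden @ W2 + b2, axis=1)

X = rng.normal(size=(1000, 16))
changed = np.mean(predict(X) != predict(X, faulty=True))
print(f"predictions changed by the injected fault: {changed:.1%}")
```

With the hidden layer silenced, every input collapses to whichever class the output biases favor, which is exactly the kind of systematic misclassification the Breier et al. attack produces.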

Embedded devices are particularly vulnerable due to limited physical hardening and resource constraints that restrict robust runtime defenses. Without tamper-resistant packaging, attacker access to system buses and memory enables precise fault strikes. Lightweight embedded ML models also lack redundancy to overcome errors.

These attacks can be particularly insidious because they bypass traditional software-based security measures, often not accounting for physical disruptions. Furthermore, because ML systems rely heavily on the accuracy and reliability of their hardware for tasks like pattern recognition, decision-making, and automated responses, any compromise in their operation due to fault injection can have severe and wide-ranging consequences.

Mitigating fault injection risks necessitates a multilayer approach. Physical hardening through tamper-proof enclosures and design obfuscation helps reduce access. Lightweight anomaly detection can identify unusual sensor inputs or erroneous model outputs (Hsiao et al. 2023). Error-correcting memories minimize disruption, while data encryption safeguards information. Emerging model watermarking techniques trace stolen parameters.

Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–6. IEEE; IEEE. https://doi.org/10.23919/date56975.2023.10137246.

However, balancing robust protections with embedded systems’ tight size and power limits remains challenging. Limited cryptographic capability and the lack of secure co-processors on cost-sensitive embedded hardware further restrict the options. Ultimately, fault injection resilience demands a cross-layer perspective spanning electrical, firmware, software, and physical design layers.

14.5.4 Side-Channel Attacks

Side-channel attacks constitute a class of security breaches that exploit information inadvertently revealed through the physical implementation of computing systems. In contrast to direct attacks targeting software or network vulnerabilities, these attacks leverage the system’s inherent hardware characteristics to extract sensitive information.

The fundamental premise of a side-channel attack is that a device’s operation can inadvertently reveal information. Such leaks can come from various sources, including the electrical power a device consumes (Kocher, Jaffe, and Jun 1999), the electromagnetic fields it emits (Gandolfi, Mourtel, and Olivier 2001), the time it takes to process certain operations, or even the sounds it produces. Each channel can indirectly glimpse the system’s internal processes, revealing information that can compromise security.

Kocher, Paul, Joshua Jaffe, and Benjamin Jun. 1999. “Differential Power Analysis.” In Advances in Cryptology - CRYPTO ’99: 19th Annual International Cryptology Conference, Santa Barbara, California, USA, August 15-19, 1999, Proceedings 19, 388–97. Springer.
Gandolfi, Karine, Christophe Mourtel, and Francis Olivier. 2001. “Electromagnetic Analysis: Concrete Results.” In Cryptographic Hardware and Embedded Systems - CHES 2001: Third International Workshop, Paris, France, May 14-16, 2001, Proceedings 3, 251–61. Springer.
Kocher, Paul, Joshua Jaffe, Benjamin Jun, and Pankaj Rohatgi. 2011. “Introduction to Differential Power Analysis.” Journal of Cryptographic Engineering 1 (1): 5–27. https://doi.org/10.1007/s13389-011-0006-y.

For instance, consider a machine learning system performing encrypted transactions. Encryption algorithms are supposed to secure data but require computational work to encrypt and decrypt information. An attacker can analyze the power consumption patterns of the device performing encryption to figure out the cryptographic key. With sophisticated statistical methods, small variations in power usage during the encryption process can be correlated with the data being processed, eventually revealing the key. Some differential analysis attack techniques are Differential Power Analysis (DPA) (Kocher et al. 2011), Differential Electromagnetic Analysis (DEMA), and Correlation Power Analysis (CPA).

For example, consider an attacker trying to break the AES encryption algorithm using a differential analysis attack. The attacker would first need to collect many power or electromagnetic traces (a trace is a record of consumptions or emissions) of the device while performing AES encryption.

Once the attacker has collected sufficient traces, they would use a statistical technique to identify correlations between the traces and the different values of the plaintext (original, unencrypted text) and ciphertext (encrypted text). These correlations would then be used to infer the value of a bit in the AES key and, eventually, the entire key. Differential analysis attacks are dangerous because they are low-cost, effective, and non-intrusive, allowing attackers to bypass algorithmic and hardware-level security measures. Compromises by these attacks are also hard to detect because they do not physically modify the device or break the encryption algorithm.
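The statistical core of such an attack can be illustrated with simulated traces. In the sketch below, leakage is modeled as the Hamming weight of the plaintext XORed with a secret key byte, plus Gaussian noise; this simplified leakage model and the trace count are assumptions for illustration, since real correlation power analysis usually targets a nonlinear intermediate such as the AES S-box output.

```python
import numpy as np

rng = np.random.default_rng(0)

def hamming_weight(values):
    """Number of set bits in each byte (a common model of power leakage)."""
    return np.unpackbits(values[:, None].astype(np.uint8), axis=1).sum(axis=1)

# Simulate a device whose power draw leaks HW(plaintext XOR secret key byte).
secret_key_byte = 0x3C
plaintexts = rng.integers(0, 256, size=5000, dtype=np.uint8)
traces = hamming_weight(plaintexts ^ secret_key_byte) + rng.normal(scale=2.0, size=5000)

# Attacker: correlate the measured traces against the leakage predicted
# by each possible key guess and pick the best match.
correlations = [
    np.corrcoef(hamming_weight(plaintexts ^ guess), traces)[0, 1]
    for guess in range(256)
]
recovered = int(np.argmax(correlations))
print(f"recovered key byte: {recovered:#04x} (true: {secret_key_byte:#04x})")
```

Because the correct guess predicts the leakage almost perfectly while wrong guesses do not, its correlation stands out even through the noise; this is the essence of DPA/CPA-style attacks.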

Below, a simplified visualization illustrates how analyzing the encryption device’s power consumption patterns can help extract information about the algorithm’s operations and, in turn, the secret data. Consider a device that takes a 5-byte password as input. The different voltage patterns measured while the encryption device performs operations on the input to authenticate the password will be analyzed and compared.

First, Figure 14.4 shows the power analysis of the device’s operations after a correct password is entered. The dense blue trace is the encryption device’s voltage measurement. What matters here is the comparison between the different analysis charts rather than the specific details of what happens in each scenario.

Figure 14.4: Power analysis of an encryption device with a correct password. Source: Colin O’Flynn.

Figure 14.5 shows the power analysis chart when an incorrect password is entered. In this attempt, the first three bytes of the password are correct, so the voltage patterns of the two charts are nearly identical up to and including the fourth byte. Once the device processes the fourth byte, it detects a mismatch between the secret key and the attempted input. Notice the change in the pattern at the transition between the fourth and fifth bytes: the voltage increases (the current decreases) because the device has stopped processing the rest of the input.

Figure 14.5: Power analysis of an encryption device with a (partially) wrong password. Source: Colin O’Flynn.

Figure 14.6 shows the chart for a completely incorrect password. After the device finishes processing the first byte, it determines that the byte is incorrect and stops further processing: the voltage goes up and the current goes down.

Figure 14.6: Power analysis of an encryption device with a wrong password. Source: Colin O’Flynn.

The example above demonstrates how information about the encryption process and the secret key can be inferred by analyzing different inputs and attempting to ‘eavesdrop’ on the device’s operations on each input byte. For a more detailed explanation, watch Video 14.3 below.

Video 14.3: Power Attack

Another example is an ML system for speech recognition, which processes voice commands to perform actions. By measuring the latency for the system to respond to commands or the power used during processing, an attacker could infer what commands are being processed and thus learn about the system’s operational patterns. Even more subtly, the sound emitted by a computer’s fan or hard drive could change in response to the workload, which a sensitive microphone could pick up and analyze to determine what kind of operations are being performed.
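
The toy Python sketch below illustrates the timing variant of this idea on the password-checking example above. The per-byte delay is artificially exaggerated so the leak is visible from coarse wall-clock timing; the secret, delay, and padding are all illustrative assumptions. The standard mitigation is a constant-time comparison such as hmac.compare_digest, which removes the dependence of execution time on the number of matching bytes.

```python
import time

SECRET = b"S3CR"  # hypothetical 4-byte device secret

def naive_check(attempt: bytes) -> bool:
    # Early-exit comparison: processing stops at the first wrong byte,
    # so execution time reveals how many leading bytes are correct.
    for a, b in zip(attempt, SECRET):
        if a != b:
            return False
        time.sleep(0.001)  # exaggerated stand-in for per-byte processing work
    return len(attempt) == len(SECRET)

def time_attempt(attempt: bytes) -> float:
    start = time.perf_counter()
    naive_check(attempt)
    return time.perf_counter() - start

# Recover the secret one byte at a time by picking the slowest candidate.
recovered = b""
for _ in range(len(SECRET)):
    timings = {bytes([c]): time_attempt(recovered + bytes([c]) + b"\x00" * 8)
               for c in range(256)}
    recovered += max(timings, key=timings.get)
print("Recovered:", recovered)
```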

In real-world scenarios, side-channel attacks have effectively extracted encryption keys and compromised secure communications. One of the earliest recorded instances of such an attack occurred in the 1960s when the British intelligence agency MI5 confronted the challenge of deciphering encrypted communications from the Egyptian Embassy in London. Their cipher-breaking efforts were initially thwarted by the computational limitations of the time until an ingenious observation by MI5 agent Peter Wright altered the course of the operation.

Wright proposed using a microphone to capture the subtle acoustic signatures emitted by the embassy’s rotor cipher machine during encryption (Burnet and Thomas 1989). The distinct mechanical clicks of the rotors as operators configured them daily leaked critical information about the initial settings. This simple side channel of sound enabled MI5 to reduce the complexity of deciphering messages dramatically. This early acoustic leak attack highlights that side-channel attacks are not merely a digital age novelty but a continuation of age-old cryptanalytic principles. The notion that where there is a signal, there is an opportunity for interception remains foundational. From mechanical clicks to electrical fluctuations and beyond, side channels enable adversaries to extract secrets indirectly through careful signal analysis.

Burnet, David, and Richard Thomas. 1989. “Spycatcher: The Commodification of Truth.” J. Law Soc. 16 (2): 210. https://doi.org/10.2307/1410360.
Asonov, D., and R. Agrawal. 2004. “Keyboard Acoustic Emanations.” In IEEE Symposium on Security and Privacy, 2004, Proceedings, 3–11. IEEE. https://doi.org/10.1109/secpri.2004.1301311.
Gnad, Dennis R. E., Fabian Oboril, and Mehdi B. Tahoori. 2017. “Voltage Drop-Based Fault Attacks on FPGAs Using Valid Bitstreams.” In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), 1–7. IEEE. https://doi.org/10.23919/fpl.2017.8056840.
Zhao, Mark, and G. Edward Suh. 2018. “FPGA-Based Remote Power Side-Channel Attacks.” In 2018 IEEE Symposium on Security and Privacy (SP), 229–44. IEEE. https://doi.org/10.1109/sp.2018.00049.

Today, acoustic cryptanalysis has evolved into attacks like keyboard eavesdropping (Asonov and Agrawal 2004). Electrical side channels range from power analysis on cryptographic hardware (Gnad, Oboril, and Tahoori 2017) to voltage fluctuations (Zhao and Suh 2018) on machine learning accelerators. Timing, electromagnetic emission, and even heat footprints can likewise be exploited. New and unexpected side channels often emerge as computing becomes more interconnected and miniaturized.

Just as MI5’s analog acoustic leak transformed their codebreaking, modern side-channel attacks circumvent traditional boundaries of cyber defense. Understanding the creative spirit and historical persistence of side channel exploits is key knowledge for developers and defenders seeking to secure modern machine learning systems comprehensively against digital and physical threats.

14.5.5 Leaky Interfaces

Leaky interfaces in embedded systems are often overlooked backdoors that can become significant security vulnerabilities. While designed for legitimate purposes such as communication, maintenance, or debugging, these interfaces may inadvertently provide attackers with a window through which they can extract sensitive information or inject malicious data.

An interface becomes “leaky” when it exposes more information than it should, often due to a lack of stringent access controls or inadequate shielding of the transmitted data. Here are some real-world examples of leaky interface issues causing security problems in IoT and embedded devices:

  • Baby Monitors: Many WiFi-enabled baby monitors have been found to have unsecured interfaces for remote access. This allowed attackers to gain live audio and video feeds from people’s homes, representing a major privacy violation.

  • Pacemakers: Interface vulnerabilities were discovered in some pacemakers that could allow attackers to manipulate cardiac functions if exploited. This presents a potentially life-threatening scenario.

  • Smart Lightbulbs: A researcher found he could access unencrypted data from smart lightbulbs via a debug interface, including WiFi credentials, allowing him to gain access to the connected network (Greengard 2015).

  • Smart Cars: If left unsecured, the OBD-II diagnostic port provides an attack vector into automotive systems. Attackers could use it to control brakes and other components (Miller and Valasek 2015).

Greengard, Samuel. 2015. The Internet of Things. The MIT Press. https://doi.org/10.7551/mitpress/10277.001.0001.
Miller, Charlie, and Chris Valasek. 2015. “Remote Exploitation of an Unaltered Passenger Vehicle.” Black Hat USA 2015 (S 91): 1–91.

While the above are not directly connected with ML, consider the example of a smart home system with an embedded ML component that controls home security based on behavior patterns it learns over time. The system includes a maintenance interface accessible via the local network for software updates and system checks. If this interface does not require strong authentication or the data transmitted through it is not encrypted, an attacker on the same network could gain access. They could then eavesdrop on the homeowner’s daily routines or reprogram the security settings by manipulating the firmware.

Such leaks are a privacy issue and a potential entry point for more damaging exploits. The exposure of training data, model parameters, or ML outputs from a leak could help adversaries construct adversarial examples or reverse-engineer models. Access through a leaky interface could also be used to alter an embedded device’s firmware, loading it with malicious code that could turn off the device, intercept data, or use it in botnet attacks.

A multi-layered approach is necessary to mitigate these risks, spanning technical controls like authentication, encryption, and anomaly detection, as well as policies and processes like interface inventories, access controls, auditing, and secure development practices. Turning off unnecessary interfaces and compartmentalizing risks via a zero-trust model provide additional protection.
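
As a small illustration of the technical controls listed above, the hedged Python sketch below guards a hypothetical local maintenance interface with a per-device token, a constant-time comparison, and an explicit command allow-list. The token name, storage location, and command set are assumptions for illustration; a production design would also encrypt the channel and log access attempts.

```python
import hmac
import os
import secrets

# Hypothetical per-device maintenance token, provisioned at manufacture and
# ideally stored in protected storage such as a TEE or secure element.
DEVICE_TOKEN = os.environ.get("MAINT_TOKEN", secrets.token_hex(32))

def authorize(request_token: str) -> bool:
    # Constant-time comparison avoids adding a timing side channel
    # to the very interface we are trying to protect.
    return hmac.compare_digest(request_token, DEVICE_TOKEN)

def handle_maintenance_command(request_token: str, command: str) -> str:
    if not authorize(request_token):
        return "403: authentication required"
    allowed = {"status", "update-check"}  # explicit allow-list, no firmware writes
    if command not in allowed:
        return "400: command not permitted on this interface"
    return f"OK: executed '{command}'"

print(handle_maintenance_command("wrong-token", "status"))
```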

As designers of embedded ML systems, we should assess interfaces early in development and continually monitor them post-deployment as part of an end-to-end security lifecycle. Understanding and securing interfaces is crucial for ensuring the overall security of embedded ML.

14.5.6 Counterfeit Hardware

ML systems are only as reliable as the underlying hardware. In an era where hardware components are global commodities, the rise of counterfeit or cloned hardware presents a significant challenge. Counterfeit hardware encompasses any components that are unauthorized reproductions of original parts. Counterfeit components infiltrate ML systems through complex supply chains that stretch across borders and involve numerous stages from manufacture to delivery.

A single lapse in the supply chain’s integrity can result in the insertion of counterfeit parts designed to imitate the functions and appearance of genuine hardware closely. For instance, a facial recognition system for high-security access control may be compromised if equipped with counterfeit processors. These processors could fail to accurately process and verify biometric data, potentially allowing unauthorized individuals to access restricted areas.

The challenge with counterfeit hardware is multifaceted. It undermines the quality and reliability of ML systems, as these components may degrade faster or perform unpredictably due to substandard manufacturing. The security risks are also profound; counterfeit hardware can contain vulnerabilities ripe for exploitation by malicious actors. For example, a cloned network router in an ML data center might include a hidden backdoor, enabling data interception or network intrusion without detection.

Furthermore, counterfeit hardware poses legal and compliance risks. Companies inadvertently utilizing counterfeit parts in their ML systems may face serious legal repercussions, including fines and sanctions for failing to comply with industry regulations and standards. This is particularly true for sectors where compliance with specific safety and privacy regulations is mandatory, such as healthcare and finance.

Economic pressures to reduce costs exacerbate the issue of counterfeit hardware and compel businesses to source from lower-cost suppliers without stringent verification processes. This economizing can inadvertently introduce counterfeit parts into otherwise secure systems. Additionally, detecting these counterfeits is inherently tricky since they are created to pass as the original components, often requiring sophisticated equipment and expertise to identify.

In the field of ML, where real-time decisions and complex computations are the norm, the implications of hardware failure range from inconvenient to outright dangerous. Stakeholders must be fully aware of these risks. The challenges posed by counterfeit hardware call for a comprehensive understanding of the current threats to ML system integrity and underscore the need for proactive, informed management of the hardware life cycle within these advanced systems.

14.5.7 Supply Chain Risks

The threat of counterfeit hardware is closely tied to broader supply chain vulnerabilities. Globalized, interconnected supply chains create multiple opportunities for compromised components to infiltrate a product’s lifecycle. Supply chains involve numerous entities, from design to manufacturing, assembly, distribution, and integration. A lack of transparency and oversight of each partner makes verifying integrity at every step challenging. Lapses anywhere along the chain can allow the insertion of counterfeit parts.

For example, a contracted manufacturer may unknowingly receive and incorporate recycled electronic waste containing dangerous counterfeits. An untrustworthy distributor could smuggle in cloned components. Insider threats at any vendor might deliberately mix counterfeits into legitimate shipments.

Once counterfeits enter the supply stream, they move quickly through multiple hands before ending up in ML systems where detection is difficult. Advanced counterfeits like refurbished parts or clones with repackaged externals can masquerade as authentic components, passing visual inspection.

To identify fakes, thorough technical profiling using micrography, X-ray screening, component forensics, and functional testing is often required. However, such costly analysis is impractical for large-volume procurement.

Strategies like supply chain audits, screening suppliers, validating component provenance, and adding tamper-evident protections can help mitigate risks. However, given global supply chain security challenges, a zero-trust approach is prudent. Designing ML systems to use redundant checking, fail-safes, and continuous runtime monitoring provides resilience against component compromises.

Rigorous validation of hardware sources coupled with fault-tolerant system architectures offers the most robust defense against the pervasive risks of convoluted, opaque global supply chains.

14.5.8 Case Study

In 2018, Bloomberg Businessweek published an alarming story that received much attention in the tech world. The article claimed that tiny spy chips had been secretly planted on Supermicro server hardware. Reporters alleged that Chinese state hackers, working through Supermicro’s manufacturing supply chain, had slipped the chips onto motherboards during production. The chips allegedly gave the attackers backdoor access to servers used by over 30 major companies, including Apple and Amazon.

If true, this would have allowed hackers to spy on private data or even tamper with systems. However, after investigating, Apple and Amazon found no evidence that such compromised Supermicro hardware existed, and other experts questioned the accuracy of Bloomberg’s reporting.

Whether the story is entirely accurate is not our concern from a pedagogical viewpoint. What matters is that the incident drew attention to the risks of global supply chains for hardware primarily manufactured in China. When companies outsource and buy hardware components from vendors worldwide, they often have limited visibility into the process. In this complex global pipeline, counterfeit or tampered hardware could be slipped in somewhere along the way without tech companies realizing it. Relying too heavily on a single manufacturer or distributor also creates risk. For instance, partly due to over-reliance on TSMC for semiconductor manufacturing, the U.S. has committed roughly 50 billion dollars to domestic chip production through the CHIPS Act.

As ML moves into more critical systems, verifying hardware integrity from design through production and delivery is crucial. The reported Supermicro backdoor demonstrated that for ML security, we cannot take global supply chains and manufacturing for granted. We must inspect and validate hardware at every link in the chain.

14.6 Embedded ML Hardware Security

14.6.1 Trusted Execution Environments

About TEE

A Trusted Execution Environment (TEE) is a secure area within a host processor that ensures the safe execution of code and the protection of sensitive data. By isolating critical tasks from the operating system, TEEs resist software and hardware attacks, providing a secure environment for handling sensitive computations.

Benefits

TEEs are particularly valuable in scenarios where sensitive data must be processed or where the integrity of a system’s operations is critical. In the context of ML hardware, TEEs ensure that the ML algorithms and data are protected against tampering and leakage. This is essential because ML models often process private information, trade secrets, or data that could be exploited if exposed.

For instance, a TEE can protect ML model parameters from being extracted by malicious software on the same device. This protection is vital for privacy and maintaining the integrity of the ML system, ensuring that the models perform as expected and do not provide skewed outputs due to manipulated parameters. Apple’s Secure Enclave, found in iPhones and iPads, is a form of TEE that provides an isolated environment to protect sensitive user data and cryptographic operations.

Trusted Execution Environments (TEEs) are crucial for industries that demand high levels of security, including telecommunications, finance, healthcare, and automotive. TEEs protect the integrity of 5G networks in telecommunications and support critical applications. In finance, they secure mobile payments and authentication processes. Healthcare relies on TEEs to safeguard sensitive patient data, while the automotive industry depends on them for the safety and reliability of autonomous systems. Across all sectors, TEEs ensure the confidentiality and integrity of data and operations.

In ML systems, TEEs can:

  • Securely perform model training and inference, ensuring the computation results remain confidential.

  • Protect the confidentiality of input data, like biometric information, used for personal identification or sensitive classification tasks.

  • Secure ML models by preventing reverse engineering, which can protect proprietary information and maintain a competitive advantage.

  • Enable secure updates to ML models, ensuring that updates come from a trusted source and have not been tampered with in transit.

  • Strengthen network security by safeguarding data transmission between distributed ML components through encryption and secure in-TEE processing.

The importance of TEEs in ML hardware security stems from their ability to protect against external and internal threats, including the following:

  • Malicious Software: TEEs can prevent high-privilege malware from accessing sensitive areas of the ML system.

  • Physical Tampering: By integrating with hardware security measures, TEEs can protect against physical tampering that attempts to bypass software security.

  • Side-channel Attacks: Although not impenetrable, TEEs can mitigate specific side-channel attacks by controlling access to sensitive operations and data patterns.

  • Network Threats: TEEs enhance network security by safeguarding data transmission between distributed ML components through encryption and secure in-TEE processing. This effectively prevents man-in-the-middle attacks and ensures data is transmitted through trusted channels.

Mechanics

The fundamentals of TEEs comprise four main parts, illustrated conceptually in the sketch after this list:

  • Isolated Execution: Code within a TEE runs in an environment separate from the device’s main operating system. This isolation protects the code from unauthorized access by other applications.

  • Secure Storage: TEEs can securely store cryptographic keys, authentication tokens, and sensitive data, preventing regular applications from accessing them outside the TEE.

  • Integrity Protection: TEEs can verify the integrity of code and data, ensuring that they have not been altered before execution or during storage.

  • Data Encryption: Data handled within a TEE can be encrypted, making it unreadable to entities without the proper keys, which are also managed within the TEE.
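
The toy Python class below is a conceptual sketch of these four parts only; it provides no real isolation, since genuine TEEs depend on hardware enforcement (for example, TrustZone or SGX). It simply models the interface an application might see: keys are created and kept inside the “enclave,” code and data can be measured, and only computation results leave. The class and method names are illustrative assumptions, not a real TEE API.

```python
import hashlib
import hmac
import os

class ToyEnclave:
    """Conceptual stand-in for a TEE; real isolation requires hardware support."""

    def __init__(self):
        # Secure storage: key material is created inside and never returned.
        self._sealing_key = os.urandom(32)

    def measure(self, blob: bytes) -> str:
        # Integrity protection: a measurement (hash) a verifier can compare
        # against an expected value before trusting the enclave contents.
        return hashlib.sha256(blob).hexdigest()

    def seal(self, data: bytes) -> bytes:
        # Data protection: a keyed MAC stands in for the authenticated
        # encryption a real TEE would apply with enclave-held keys.
        return hmac.new(self._sealing_key, data, hashlib.sha256).digest()

    def infer(self, x: float) -> float:
        # Isolated execution: the "model" runs inside; only the output leaves.
        return 2.0 * x + 1.0  # placeholder for a protected ML model

enclave = ToyEnclave()
print(enclave.measure(b"model-v1")[:16], enclave.infer(3.0))
```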

Here are some examples of TEEs that provide hardware-based security for sensitive applications:

  • ARM TrustZone: This technology creates separate “secure world” and “normal world” execution environments isolated by hardware controls and is implemented in many mobile chipsets.

  • Intel SGX: Intel’s Software Guard Extensions provide enclaves for code execution that protect against various software-based threats, including OS-layer vulnerabilities, and are used to safeguard sensitive workloads in the cloud.

  • Qualcomm Secure Execution Environment: A hardware sandbox on Qualcomm chipsets for mobile payment and authentication apps.

  • Apple Secure Enclave: A TEE for biometric data and cryptographic key management on iPhones and iPads, facilitating secure mobile payments.

Figure 14.7 is a diagram demonstrating a secure enclave isolated from the host processor to provide an extra layer of security. The secure enclave has a boot ROM to establish a hardware root of trust, an AES engine for efficient and secure cryptographic operations, and protected memory. It also has a mechanism to store information securely on attached storage separate from the NAND flash storage used by the application processor and operating system. This design keeps sensitive user data secure even when the Application Processor kernel becomes compromised.

Figure 14.7: System-on-chip secure enclave. Source: Apple.

Tradeoffs

While Trusted Execution Environments offer significant security benefits, their implementation involves trade-offs. Several factors influence whether a system includes a TEE:

Cost: Implementing TEEs involves additional costs, both direct costs for the hardware and indirect costs associated with developing and maintaining secure software for TEEs. These costs may not be justifiable for every device, especially low-margin products.

Complexity: TEEs add complexity to system design and development. Integrating a TEE with existing systems requires a substantial redesign of the hardware and software stack, which can be a barrier, especially for legacy systems.

Performance Overhead: TEEs may introduce performance overhead due to the additional steps involved in encryption and data verification, which could slow down time-sensitive applications.

Development Challenges: Developing for TEEs requires specialized knowledge and often must adhere to strict development protocols. This can extend development time and complicate the debugging and testing processes.

Scalability and Flexibility: TEEs, due to their protected nature, may impose limitations on scalability and flexibility. Upgrading protected components or scaling the system for more users or data can be more challenging when everything must pass through a secure, enclosed environment.

Energy Consumption: The increased processing required for encryption, decryption, and integrity checks can lead to higher energy consumption, a significant concern for battery-powered devices.

Market Demand: Not all markets or applications require the level of security provided by TEEs. For many consumer applications, the perceived risk may be low enough that manufacturers opt not to include TEEs in their designs.

Security Certification and Assurance: Systems with TEEs may need rigorous security certification, such as Common Criteria (CC) evaluation or assessment by bodies like the European Union Agency for Cybersecurity (ENISA), which can be lengthy and expensive. Some organizations may choose to forgo TEEs to avoid these hurdles.

Limited Resource Devices: Devices with limited processing power, memory, or storage may not be able to support TEEs without compromising their primary functionality.

14.6.2 Secure Boot

About

Secure Boot is a fundamental security mechanism that ensures a device boots only with software trusted by the Original Equipment Manufacturer (OEM). During startup, the firmware checks the digital signature of each boot software component, including the bootloader, kernel, and base operating system. This process verifies that the software has not been altered or tampered with. If any signature fails verification, the boot process halts to prevent unauthorized code from executing and compromising the system’s security.
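
A minimal sketch of this verification step is shown below, assuming Ed25519 signatures and the Python cryptography library purely for illustration. In a real device, the OEM public key is anchored in ROM or fuses and each stage verifies the next in firmware or hardware rather than in Python, but the logic is the same: refuse to hand over control if any signature check fails.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At the factory: the OEM signs each boot component offline (keys are shown
# here only to keep the example self-contained; real signing keys stay offline).
oem_key = Ed25519PrivateKey.generate()
oem_public = oem_key.public_key()
boot_images = {name: os.urandom(64) for name in ("bootloader", "kernel", "rootfs")}
signatures = {name: oem_key.sign(image) for name, image in boot_images.items()}

# On every boot: verify each stage before handing control to it.
def secure_boot(images, sigs, public_key):
    for name, image in images.items():
        try:
            public_key.verify(sigs[name], image)
        except InvalidSignature:
            raise SystemExit(f"Boot halted: '{name}' failed signature verification")
        print(f"{name}: signature OK, continuing boot chain")

secure_boot(boot_images, signatures, oem_public)
```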

Benefits

The integrity of an embedded machine learning (ML) system is paramount from the moment it is powered on. Any compromise in the boot process can lead to the execution of malicious software before the operating system and ML applications begin, resulting in manipulated ML operations, unauthorized data access, or repurposing the device for malicious activities such as botnets or crypto-mining.

Secure Boot offers vital protections for embedded ML hardware through the following critical mechanisms:

  • Protecting ML Data: Ensuring that the data used by ML models, which may include private or sensitive information, is not exposed to tampering or theft during the boot process.

  • Guarding Model Integrity: Maintaining the integrity of the ML models is crucial, as tampering with them could lead to incorrect or malicious outcomes.

  • Secure Model Updates: Enabling secure updates to ML models and algorithms, ensuring that updates are authenticated and have not been altered.

Mechanics

Secure Boot works with TEEs to further enhance system security. Figure 14.8 illustrates a flow diagram of a trusted embedded system. In the initial validation phase, Secure Boot verifies that the code running within the TEE is the correct, untampered version authorized by the device manufacturer. By checking digital signatures of the firmware and other critical system components, Secure Boot prevents unauthorized modifications that could compromise the TEE’s security capabilities. This establishes a foundation of trust upon which the TEE can securely execute sensitive operations such as cryptographic key management and secure data processing. By enforcing these layers of security, Secure Boot enables resilient and secure device operations in even the most resource-constrained environments.

Figure 14.8: Secure Boot flow. Source: R. V. and A. (2018).
R. V., Rashmi, and Karthikeyan A. 2018. “Secure Boot of Embedded Applications - a Review.” In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 291–98. IEEE. https://doi.org/10.1109/iceca.2018.8474730.

Case Study: Apple’s Face ID

A real-world example of Secure Boot’s application can be observed in Apple’s Face ID technology, which uses advanced machine learning algorithms to enable facial recognition on iPhones and iPads. Face ID relies on a sophisticated integration of sensors and software to precisely map the geometry of a user’s face. For Face ID to operate securely and protect users’ biometric data, the device’s operations must be trustworthy from initialization. This is where Secure Boot plays a pivotal role. The following outlines how Secure Boot functions in conjunction with Face ID:

Initial Verification: Upon booting up an iPhone, the Secure Boot process commences within the Secure Enclave, a specialized coprocessor designed to add an extra layer of security. The Secure Enclave handles biometric data, such as fingerprints for Touch ID and facial recognition data for Face ID. During the boot process, the system rigorously verifies that Apple has digitally signed the Secure Enclave’s firmware, guaranteeing its authenticity. This verification step ensures that the firmware used to process biometric data remains secure and uncompromised.

Continuous Security Checks: Following the system’s initialization and validation by Secure Boot, the Secure Enclave communicates with the device’s central processor to maintain a secure boot chain. During this process, the digital signatures of the iOS kernel and other critical boot components are meticulously verified to ensure their integrity before proceeding. This “chain of trust” model effectively prevents unauthorized modifications to the bootloader and operating system, safeguarding the device’s overall security.

Face Data Processing: Once the secure boot sequence is completed, the Secure Enclave interacts securely with the machine learning algorithms that power Face ID. Facial recognition involves projecting and analyzing over 30,000 invisible points to create a depth map of the user’s face and an infrared image. This data is converted into a mathematical representation and is securely compared with the registered face data stored in the Secure Enclave.

Secure Enclave and Data Protection: The Secure Enclave is precisely engineered to protect sensitive data and manage cryptographic operations that safeguard this data. Even in the event of a compromised operating system kernel, the facial data processed through Face ID remains inaccessible to unauthorized applications or external attackers. Importantly, Face ID data is never transmitted off the device and is not stored on iCloud or other external servers.

Firmware Updates: Apple frequently releases updates to address security vulnerabilities and enhance system functionality. Secure Boot ensures that all firmware updates are authenticated, allowing only those signed by Apple to be installed. This process helps preserve the integrity and security of the Face ID system over time.

By integrating Secure Boot with dedicated hardware such as the Secure Enclave, Apple delivers robust security guarantees for critical operations like facial recognition.

Challenges

Despite its benefits, implementing Secure Boot presents several challenges, particularly in complex and large-scale deployments:

Key Management Complexity: Generating, storing, distributing, rotating, and revoking cryptographic keys in a provably secure manner is challenging yet vital for maintaining the chain of trust. Any compromise of keys cripples the protections. Large enterprises managing keys for multitudes of devices face particular challenges of scale.

Performance Overhead: Checking cryptographic signatures during boot can add 50 to 100 ms or more per verified component. This delay may be prohibitive for time-sensitive or resource-constrained applications, although the impact can be reduced through parallelization and hardware acceleration.

Signing Burden: Developers must diligently ensure that all software components involved in the boot process (bootloaders, firmware, OS kernel, drivers, applications, and so on) are correctly signed with trusted keys. Accommodating third-party code signing remains an issue.

Cryptographic Verification: Secure algorithms and protocols must validate the legitimacy of keys and signatures, avoid tampering or bypass, and support revocation. Accepting dubious keys undermines trust.

Customizability Constraints: Vendor-locked Secure Boot architectures limit user control and upgradability. Open-source bootloaders like u-boot and coreboot enable security while supporting customizability.

Scalable Standards: Emerging standards like Device Identifier Composition Engine (DICE) and IDevID promise to securely provision and manage device identities and keys at scale across ecosystems.

Adopting Secure Boot requires following security best practices around key management, crypto validation, signed updates, and access control. Secure Boot provides a robust foundation for building device integrity and trust when implemented with care.

14.6.3 Hardware Security Modules

About HSM

A Hardware Security Module (HSM) is a physical device that manages digital keys for strong authentication and provides cryptographic processing. These modules are designed to be tamper-resistant and provide a secure environment for performing cryptographic operations. HSMs come as standalone devices, plug-in cards, or integrated circuits embedded in other devices.

HSMs are crucial for various security-sensitive applications because they offer a hardened, secure enclave for storing cryptographic keys and executing cryptographic functions. They are particularly important for ensuring the security of transactions, identity verifications, and data encryption.

Benefits

HSMs provide several functionalities that are beneficial for the security of ML systems:

Protecting Sensitive Data: In machine learning applications, models often process sensitive data that can be proprietary or personal. HSMs protect the encryption keys used to secure this data, both at rest and in transit, from exposure or theft.

Ensuring Model Integrity: The integrity of ML models is vital for their reliable operation. HSMs can securely manage the signing and verification processes for ML software and firmware, ensuring unauthorized parties have not altered the models.

Secure Model Training and Updates: The training and updating of ML models involve the processing of potentially sensitive data. HSMs ensure that these processes are conducted within a secure cryptographic boundary, protecting against the exposure of training data and unauthorized model updates.
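
The sketch below illustrates the data-at-rest pattern described above using authenticated encryption from the Python cryptography library. It is only a stand-in: with a real HSM, the key would be generated inside the module and never exported (typically accessed through an interface such as PKCS#11), and the application would hold only a key handle rather than the key bytes shown here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for a key that, in production, is generated and stored inside the
# HSM and never exported; the application would only hold a key handle.
hsm_key = AESGCM.generate_key(bit_length=256)

def encrypt_model(weights: bytes) -> tuple[bytes, bytes]:
    # Authenticated encryption protects both the confidentiality and the
    # integrity of the model parameters at rest.
    nonce = os.urandom(12)
    return nonce, AESGCM(hsm_key).encrypt(nonce, weights, b"model-v1")

def decrypt_model(nonce: bytes, blob: bytes) -> bytes:
    return AESGCM(hsm_key).decrypt(nonce, blob, b"model-v1")

nonce, blob = encrypt_model(b"\x00" * 1024)          # fake weight buffer
assert decrypt_model(nonce, blob) == b"\x00" * 1024
```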

Tradeoffs

HSMs involve several tradeoffs for embedded ML. These tradeoffs are similar to those of TEEs, but for completeness, we discuss them here through the lens of HSMs.

Cost: HSMs are specialized devices that can be expensive to procure and implement, raising the overall cost of an ML project. This may be a significant factor for embedded systems, where cost constraints are often stricter.

Performance Overhead: While secure, the cryptographic operations performed by HSMs can introduce latency. Any added delay can be critical in high-performance embedded ML applications where inference must happen in real-time, such as in autonomous vehicles or translation devices.

Physical Space: Embedded systems are often limited by physical space, and adding an HSM can be challenging in tightly constrained environments. This is especially true for consumer electronics and wearable technology, where size and form factor are key considerations.

Power Consumption: HSMs require power for their operation, which can be a drawback for battery-operated devices with long battery life. The secure processing and cryptographic operations can drain the battery faster, a significant tradeoff for mobile or remote embedded ML applications.

Complexity in Integration: Integrating HSMs into existing hardware systems adds complexity. It often requires specialized knowledge to manage the secure communication between the HSM and the system’s processor and develop software capable of interfacing with the HSM.

Scalability: Scaling an ML solution that uses HSMs can be challenging. Managing a fleet of HSMs and ensuring uniformity in security practices across devices can become complex and costly when the deployment size increases, especially when dealing with embedded systems where communication is costly.

Operational Complexity: HSMs can make updating firmware and ML models more complex. Every update must be signed and possibly encrypted, which adds steps to the update process and may require secure mechanisms for key management and update distribution.

Development and Maintenance: The secure nature of HSMs means that only limited personnel have access to the HSM for development and maintenance purposes. This can slow down the development process and make routine maintenance more difficult.

Certification and Compliance: Ensuring that an HSM meets specific industry standards and compliance requirements can add to the time and cost of development. This may involve undergoing rigorous certification processes and audits.

14.6.4 Physical Unclonable Functions (PUFs)

About

Physical Unclonable Functions (PUFs) provide a hardware-intrinsic means for cryptographic key generation and device authentication by harnessing the inherent manufacturing variability in semiconductor components. During fabrication, random physical factors such as doping variations, line edge roughness, and dielectric thickness result in microscale differences between semiconductors, even when produced from the same masks. These create detectable timing and power variances that act as a “fingerprint” unique to each chip. PUFs exploit this phenomenon by incorporating integrated circuits to amplify minute timing or power differences into measurable digital outputs.

When stimulated with an input challenge, the PUF circuit produces an output response based on the device’s intrinsic physical characteristics. Due to this physical uniqueness, the same challenge yields a different response on every other device. This challenge-response mechanism can be used to securely generate keys and identifiers tied to the specific hardware, perform device authentication, or securely store secrets. For example, a key derived from a PUF will only work on that device and cannot be cloned or extracted even with physical access or full reverse engineering (Gao, Al-Sarawi, and Abbott 2020).
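
The toy simulation below conveys the idea for an SRAM-style PUF: each cell’s power-up value is biased by manufacturing randomness, repeated reads are majority-voted to suppress noise (a crude stand-in for the error correction a fuzzy extractor provides), and the stable bit pattern is hashed into a device-unique key. The cell count, noise level, and voting scheme are illustrative assumptions, not a faithful PUF model.

```python
import hashlib
import numpy as np

rng = np.random.default_rng()

class ToySRAMPUF:
    """Toy model: each cell has a bias that determines its power-up value."""
    def __init__(self, n_cells=256):
        # Manufacturing randomness: fixed per device at "fabrication".
        self.bias = rng.uniform(0, 1, n_cells)

    def read(self, noise=0.05):
        # Each read is the biased power-up state plus environmental noise.
        return (self.bias + rng.normal(0, noise, self.bias.size)) > 0.5

def stable_response(puf, reads=15):
    # Majority vote over repeated reads suppresses per-read noise.
    return np.mean([puf.read() for _ in range(reads)], axis=0) > 0.5

device = ToySRAMPUF()
key = hashlib.sha256(np.packbits(stable_response(device)).tobytes()).hexdigest()
print("Device-unique key:", key[:16], "...")
```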

Benefits

PUF key generation avoids external key storage, which risks exposure. It also provides a foundation for other hardware security primitives like Secure Boot. Implementation challenges include managing varying reliability and entropy across different PUFs, sensitivity to environmental conditions, and susceptibility to machine learning modeling attacks. When designed carefully, PUFs enable promising applications in IP protection, trusted computing, and anti-counterfeiting.

Utility

Machine learning models are rapidly becoming a core part of the functionality for many embedded devices, such as smartphones, smart home assistants, and autonomous drones. However, securing ML on resource-constrained embedded hardware can be challenging. This is where physical unclonable functions (PUFs) come in uniquely handy. Let’s look at some examples of how PUFs can be useful.

PUFs provide a way to generate unique fingerprints and cryptographic keys tied to the physical characteristics of each chip on the device. Let’s take an example. We have a smart camera drone that uses embedded ML to track objects. A PUF integrated into the drone’s processor could create a device-specific key to encrypt the ML model before loading it onto the drone. This way, even if an attacker somehow hacks the drone and tries to steal the model, they won’t be able to use it on another device!

The same PUF key could also create a digital watermark embedded in the ML model. If that model ever gets leaked and posted online by someone trying to pirate it, the watermark could help prove it came from your stolen drone and didn’t originate from the attacker. Also, imagine the drone camera connects to the cloud to offload some of its ML processing. The PUF can authenticate that the camera is legitimate before the cloud will run inference on sensitive video feeds. The cloud could verify that the drone has not been physically tampered with by checking that the PUF responses have not changed.

PUFs enable all this security through the inherent randomness and hardware binding of their challenge-response behavior. Because they do not need to store keys externally, PUFs are well suited to securing embedded ML with limited resources, offering a unique advantage over other mechanisms.

Mechanics

The working principle behind PUFs, shown in Figure 14.9, involves generating a “challenge-response” pair, where a specific input (the challenge) to the PUF circuit results in an output (the response) that is determined by the unique physical properties of that circuit. This process can be likened to a fingerprinting mechanism for electronic devices. Devices that use ML for processing sensor data can employ PUFs to secure communication between devices and prevent the execution of ML models on counterfeit hardware.

Figure 14.9 illustrates an overview of the PUF basics: a) PUF can be thought of as a unique fingerprint for each piece of hardware; b) an Optical PUF is a special plastic token that is illuminated, creating a unique speckle pattern that is then recorded; c) in an APUF (Arbiter PUF), challenge bits select different paths, and a judge decides which one is faster, giving a response of ‘1’ or ‘0’; d) in an SRAM PUF, the response is determined by the mismatch in the threshold voltage of transistors, where certain conditions lead to a preferred response of ‘1’. Each of these methods uses specific characteristics of the hardware to create a unique identifier.

Figure 14.9: PUF basics. Source: Gao, Al-Sarawi, and Abbott (2020).
Gao, Yansong, Said F. Al-Sarawi, and Derek Abbott. 2020. “Physical Unclonable Functions.” Nature Electronics 3 (2): 81–91. https://doi.org/10.1038/s41928-020-0372-5.

Challenges

There are a few challenges with PUFs. The PUF response can be sensitive to environmental conditions, such as temperature and voltage fluctuations, leading to inconsistent behavior that must be accounted for in the design. Also, since PUFs can generate many unique challenge-response pairs, managing and ensuring the consistency of these pairs across the device’s lifetime can be challenging. Last but not least, integrating PUF technology may increase the overall manufacturing cost of a device, although it can save costs in key management over the device’s lifecycle.

14.7 Privacy Concerns in Data Handling

Handling personal and sensitive data securely and ethically is critical as machine learning permeates devices like smartphones, wearables, and smart home appliances. For medical hardware, handling data securely and ethically is further required by law through the Health Insurance Portability and Accountability Act (HIPAA). These embedded ML systems pose unique privacy risks, given their intimate proximity to users’ lives.

14.7.1 Sensitive Data Types

Embedded ML devices like wearables, smart home assistants, and autonomous vehicles frequently process highly personal data that requires careful handling to maintain user privacy and prevent misuse. Specific examples include medical reports and treatment plans processed by health wearables, private conversations continuously captured by smart home assistants, and detailed driving habits collected by connected cars. Compromise of such sensitive data can lead to serious consequences like identity theft, emotional manipulation, public shaming, and mass surveillance overreach.

Sensitive data takes many forms - structured records like contact lists and unstructured content like conversational audio and video streams. In medical settings, protected health information (PHI) is collected by providers throughout every interaction and is heavily regulated by strict HIPAA guidelines. Even outside medical settings, sensitive data is collected in the form of Personally Identifiable Information (PII), defined as “any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.” Examples of PII include email addresses, Social Security numbers, and phone numbers. PII is collected in medical, financial, and many other settings and is governed by policies such as those of the Department of Labor.

Even derived model outputs can indirectly leak details about individuals. Beyond personal data, proprietary algorithms and datasets also warrant confidentiality protections. We covered several of these topics in detail in the Data Engineering section.

Techniques like de-identification, aggregation, anonymization, and federation can help transform sensitive data into less risky forms while retaining analytical utility. However, diligent controls around access, encryption, auditing, consent, minimization, and compliance practices are still essential throughout the data lifecycle. Regulations like GDPR categorize different classes of sensitive data and prescribe responsibilities around their ethical handling. Standards like NIST 800-53 provide rigorous security control guidance for confidentiality protection. With growing reliance on embedded ML, understanding sensitive data risks is crucial.

14.7.2 Applicable Regulations

Many embedded ML applications handle sensitive user data under HIPAA, GDPR, and CCPA regulations. Understanding the protections mandated by these laws is crucial for building compliant systems.

  • The HIPAA Privacy Rule governs medical data privacy and security in the US for covered entities such as care providers, with severe penalties for violations. Any health-related embedded ML device, such as a diagnostic wearable or assistive robot, would need to implement controls like audit trails, access controls, and encryption as prescribed by HIPAA.

  • GDPR imposes transparency, retention limits, and user rights on EU citizen data, even when processed by companies outside the EU. Smart home systems capturing family conversations or location patterns would need GDPR compliance. Key requirements include data minimization, encryption, and mechanisms for consent and erasure.

  • CCPA, which applies in California, protects consumer data privacy through provisions like required disclosures and opt-out rights. IoT gadgets like smart speakers and fitness trackers used by Californians likely fall under its scope.

  • The CCPA was the first state-specific set of regulations regarding privacy concerns. Following the CCPA, similar regulations were also enacted in 10 other states, with some states proposing bills for consumer data privacy protections.

Additionally, when relevant to the application, sector-specific rules govern telematics, financial services, utilities, and so on. Best practices like Privacy by Design, impact assessments, and maintaining audit trails help embed compliance even where it is not already required by law. Given potentially costly penalties, consulting legal and compliance teams is advisable when developing regulated embedded ML systems.

14.7.3 De-identification

If medical data is thoroughly de-identified, HIPAA guidelines no longer directly apply, and far fewer regulations govern its use. However, the data must be de-identified using HIPAA-approved methods (the Safe Harbor method or the Expert Determination method) for the guidelines to no longer apply.

Safe Harbor Methods

Safe Harbor methods are most commonly used for de-identifying protected healthcare information due to the limited resources needed compared to Expert Determination methods. Safe Harbor de-identification requires scrubbing datasets of any data that falls into one of 18 categories. The following categories are listed as sensitive information based on the Safe Harbor standard:

  • Names; geographic locators; birthdates; phone numbers; email addresses; IP addresses; Social Security numbers; medical record numbers; health plan beneficiary numbers; device identifiers and serial numbers; certificate/license numbers (birth certificates, driver’s licenses, etc.); account numbers; vehicle identifiers; website URLs; full-face photos and comparable images; biometric identifiers; and any other unique identifiers.

For most of these categories, all data must be removed regardless of circumstances. For a few categories, including geographic information and birthdates, the data can instead be partially generalized so that re-identification becomes difficult. For example, the first three digits of a zip code can remain if the corresponding geographic area contains enough people to make re-identification difficult. Birthdates must be reduced to the birth year only, and all ages above 89 must be aggregated into a single 90+ category.
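
A hedged sketch of these partial-removal rules is shown below: zip codes are truncated to three digits (and zeroed for hypothetical low-population areas), birthdates are reduced to the year, ages of 90 and above are bucketed, and direct identifiers are dropped. The field names and the small-area list are assumptions for illustration; real Safe Harbor de-identification must cover all 18 categories and be reviewed against the regulation.

```python
SMALL_ZIP3 = {"036", "059", "102"}   # hypothetical low-population 3-digit prefixes

def safe_harbor_generalize(record: dict) -> dict:
    out = dict(record)
    # Geographic data: keep only the first 3 zip digits, unless the area is small.
    zip3 = record["zip"][:3]
    out["zip"] = "000" if zip3 in SMALL_ZIP3 else zip3
    # Dates: keep only the birth year; ages 90 and above are aggregated.
    out["birth_year"] = record["birthdate"][:4]
    out.pop("birthdate")
    out["age"] = "90+" if record["age"] >= 90 else record["age"]
    # Direct identifiers must be removed entirely.
    for field in ("name", "phone", "email", "ssn"):
        out.pop(field, None)
    return out

print(safe_harbor_generalize({
    "name": "Jane Doe", "zip": "02138", "birthdate": "1931-07-04",
    "age": 93, "phone": "555-0100", "email": "j@example.com", "ssn": "000-00-0000",
}))
```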

Expert Determination Methods

Safe Harbor methods work for many cases of medical data de-identification, but re-identification is still possible in some situations. For example, suppose you collect data on a patient in an urban area with a populous zip code, but you document a rare disease that only 25 people in the entire city have. Given the geographic data coupled with the birth year, it is quite possible that someone could re-identify this individual, which would be a serious privacy breach.

In unique cases like these, expert determination data de-identification methods are preferred. Expert determination de-identification requires a “person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods for rendering information not individually identifiable” to evaluate a dataset and determine if the risk of re-identification of individual data in a given dataset in combination with publicly available data (voting records, etc.), is extremely small.

Expert Determination de-identification is understandably harder to complete than Safe Harbor de-identification because of the cost and difficulty of engaging an expert to verify the likelihood of re-identification. However, in many cases, expert determination is required to ensure that re-identification of the data is extremely unlikely.

14.7.4 Data Minimization

Data minimization involves collecting, retaining, and processing only the necessary user data to reduce privacy risks from embedded ML systems. This starts by restricting the data types and instances gathered to the bare minimum required for the system’s core functionality. For example, an object detection model only collects the images needed for that specific computer vision task. Similarly, a voice assistant would limit audio capture to specific spoken commands rather than persistently recording ambient sounds.

Where possible, using temporary data that briefly resides in memory without persistent storage provides additional minimization. A clear legal basis, such as user consent, should be established for collection and retention. Sandboxing and access controls prevent unauthorized use beyond intended tasks. Retention periods should be defined based on purpose, with secure deletion procedures removing expired data.

Data minimization can be broken down into 3 categories:

  1. “Data must be adequate in relation to the purpose that is pursued.” Omitting too much data can limit the accuracy of models trained on it and the general usefulness of the dataset; data minimization therefore asks for the minimum amount of user data that still produces a dataset of value.

  2. The data collected from users must be relevant to the purpose of the data collection.

  3. Users’ data should be limited to only the necessary data to fulfill the purpose of the initial data collection. If similarly robust and accurate results can be obtained from a smaller dataset, any additional data beyond this smaller dataset should not be collected.

Emerging techniques like differential privacy, federated learning, and synthetic data generation allow useful insights to be derived from less raw user data. Performing data flow mapping and impact assessments helps identify opportunities to minimize raw data usage.

Methodologies like Privacy by Design (Cavoukian 2009) consider such minimization early in system architecture. Regulations like GDPR also mandate data minimization principles. With a multilayered approach across legal, technical, and process realms, data minimization limits risks in embedded ML products.

Cavoukian, Ann. 2009. “Privacy by Design.” Office of the Information and Privacy Commissioner.

Case Study - Performance-Based Data Minimization

Performance-based data minimization (Biega et al. 2020) expands on the third category of data minimization mentioned above, namely limitation. It defines the robustness of model results on a given dataset in terms of specific performance metrics, such that additional data should not be collected if it does not significantly improve performance. The performance metrics can be divided into two categories, listed below.

Biega, Asia J., Peter Potash, Hal Daumé, Fernando Diaz, and Michèle Finck. 2020. “Operationalizing the Legal Principle of Data Minimization for Personalization.” In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, edited by Jimmy Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, 399–408. ACM. https://doi.org/10.1145/3397271.3401034.
  1. Global data minimization performance: satisfied if a dataset minimizes the amount of per-user data while its mean performance across all data is comparable to the mean performance of the original, unminimized dataset.

  2. Per-user data minimization performance: satisfied if a dataset minimizes the amount of per-user data while the minimum performance across individual users’ data is comparable to that of individual users’ data in the original, unminimized dataset.

Performance-based data minimization can be applied in machine learning settings such as movie recommendation algorithms and e-commerce applications.

Global data minimization is much more feasible than per-user data minimization, given the much larger differences in per-user losses between the minimized and original datasets.
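
The sketch below illustrates the global criterion on synthetic data: a simple scikit-learn classifier stands in for a recommender, each user’s training history is capped at k records, and mean test accuracy is compared against training on the full dataset. The data generator, model, and values of k are assumptions; the takeaway is the shape of the procedure, namely to stop collecting per-user data once performance stops improving meaningfully.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, per_user = 200, 50
X = rng.normal(size=(n_users * per_user, 10))
y = ((X[:, :3].sum(axis=1) + rng.normal(0, 0.5, len(X))) > 0).astype(int)
users = np.repeat(np.arange(n_users), per_user)

X_tr, X_te, y_tr, y_te, u_tr, _ = train_test_split(X, y, users, random_state=0)

def accuracy_with_at_most(k):
    # Keep at most k training records per user; k=None keeps the full dataset.
    if k is None:
        idx = np.arange(len(u_tr))
    else:
        idx = np.concatenate([np.where(u_tr == u)[0][:k] for u in np.unique(u_tr)])
    model = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    return model.score(X_te, y_te)

for k in (None, 20, 5, 1):
    print(f"max records per user = {k}: test accuracy = {accuracy_with_at_most(k):.3f}")
```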

14.7.6 Privacy Concerns in Machine Learning

Generative AI

Privacy and security concerns have also risen with the public use of generative AI models, including OpenAI’s GPT-4 and other large language models (LLMs). ChatGPT, in particular, has drawn recent scrutiny over privacy, given the personal information collected from its users. In June 2023, a class action lawsuit was filed against OpenAI over concerns that ChatGPT was trained on proprietary medical and personal information without proper permission or consent. As a result of these privacy concerns, many companies have prohibited their employees from accessing ChatGPT or uploading private, company-related information to the chatbot. Furthermore, ChatGPT is susceptible to prompt injection and other security attacks that could compromise the privacy of the proprietary data on which it was trained.

Case Study

While ChatGPT has instituted protections to prevent people from accessing private and ethically questionable information, several individuals have successfully bypassed these protections through prompt injection and other security attacks. As demonstrated in Figure 14.10, users can bypass ChatGPT protections to mimic the tone of a “deceased grandmother” to learn how to bypass a web application firewall (Gupta et al. 2023).

Figure 14.10: Grandma role play to bypass safety restrictions. Source: Gupta et al. (2023).

Further, users have also successfully used reverse psychology to manipulate ChatGPT and access information initially prohibited by the model. In Figure 14.11, a user is initially prevented from learning about piracy websites through ChatGPT but can bypass these restrictions using reverse psychology.

Figure 14.11: Reverse psychology to bypass safety restrictions. Source: Gupta et al. (2023).
Gupta, Maanak, Charankumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj. 2023. “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy.” #IEEE_O_ACC# 11: 80218–45. https://doi.org/10.1109/access.2023.3300381.

The ease with which security attacks can manipulate ChatGPT is concerning, given the private information it was trained on without consent. Further research on data privacy in LLMs and generative AI should focus on making models less susceptible to prompt injection attacks.

Data Erasure

Many of the regulations mentioned above, including GDPR, contain a “right to be forgotten” clause, which states that “the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay.” However, even when user data has been erased from a platform, it is only partially erased if a machine learning model has already been trained on it for other purposes. Through methods similar to membership inference attacks, adversaries can still infer the data a model was trained on, even if that data’s presence has been explicitly removed online.

One approach to addressing privacy concerns with machine learning training data is differential privacy. For example, by adding Laplace noise during training, a model can be made robust to membership inference attacks, preventing deleted data from being recovered. Another approach to preventing deleted data from being inferred through security attacks is simply retraining the model from scratch on the remaining data. Since this process is time-consuming and computationally expensive, researchers have also explored machine unlearning, in which a model actively iterates on itself to remove the influence of “forgotten” data it may have been trained on, as mentioned below.

14.8 Privacy-Preserving ML Techniques

Many techniques have been developed to preserve privacy, each addressing different aspects and data security challenges. These methods can be broadly categorized into several key areas: Differential Privacy, which focuses on statistical privacy in data outputs; Federated Learning, emphasizing decentralized data processing; Homomorphic Encryption and Secure Multi-party Computation (SMC), both enabling secure computations on encrypted or private data; Data Anonymization and Data Masking and Obfuscation, which alter data to protect individual identities; Private Set Intersection and Zero-Knowledge Proofs, facilitating secure data comparisons and validations; Decentralized Identifiers (DIDs) for self-sovereign digital identities; Privacy-Preserving Record Linkage (PPRL), linking data across sources without exposure; Synthetic Data Generation, creating artificial datasets for safe analysis; and Adversarial Learning Techniques, enhancing data or model resistance to privacy attacks.

Given the extensive range of these techniques, it is not feasible to cover each of them in depth within a single chapter. Therefore, we will explore a few specific techniques in relative detail, providing a deeper understanding of their principles, applications, and the unique privacy challenges they address in machine learning. This focused approach gives a more comprehensive and practical understanding of key privacy-preserving methods in modern ML systems.

14.8.1 Differential Privacy

Core Idea

Differential privacy is a framework for quantifying and managing the privacy of individuals in a dataset (Dwork et al. 2006). It provides a mathematical guarantee that an individual’s privacy will not be compromised, regardless of any additional knowledge an attacker may possess. The core idea of differential privacy is that the outcome of any analysis (like a statistical query) should be essentially the same whether or not any particular individual’s data is included in the dataset. This means that by observing the analysis result, one cannot determine whether any individual’s data was used in the computation.

Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. “Calibrating Noise to Sensitivity in Private Data Analysis.” In Theory of Cryptography, edited by Shai Halevi and Tal Rabin, 265–84. Berlin, Heidelberg: Springer Berlin Heidelberg.

For example, suppose a database contains medical records for 10 patients, and we want to release statistics about the prevalence of diabetes in this sample without revealing any one patient’s condition. To do this, we could add a small amount of random noise to the true count before releasing it. If the true number of diabetes patients is 6, we might add noise from a Laplace distribution and randomly output 5, 6, or 7, each with some probability. An observer cannot tell whether any single patient has diabetes based only on the noisy output; the query result looks similar whether any given patient’s data is included or excluded. This is differential privacy. More formally, a randomized algorithm satisfies ε-differential privacy if, for any neighboring databases D and Dʹ differing in only one entry, the probability of any outcome changes by at most a multiplicative factor of e^ε. A lower ε provides stronger privacy guarantees.

The Laplace mechanism is one of the most straightforward and commonly used methods to achieve differential privacy. It adds noise drawn from a Laplace distribution to the data or query results. More generally, the principle of adding calibrated random noise is central to differential privacy: the noise is scaled to the query’s sensitivity so that the necessary privacy guarantee holds while the released data remains useful.
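Returning to the diabetes-count example, a minimal sketch of the Laplace mechanism for a count query might look as follows. This is illustrative only: the noise scale is sensitivity/ε (sensitivity 1 for a counting query), and the specific epsilons and counts are made up for exposition.

```python
# A minimal Laplace-mechanism sketch for a count query (sensitivity = 1,
# since adding or removing one patient changes the count by at most 1).
import numpy as np

rng = np.random.default_rng()

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Return an epsilon-differentially-private version of a count."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

true_count = 6                      # patients with diabetes out of 10
for eps in (0.1, 1.0, 10.0):        # smaller epsilon => more noise, stronger privacy
    print(eps, round(laplace_count(true_count, eps), 2))
```

Running the sketch with different epsilons makes the privacy-accuracy tradeoff visible: small epsilons produce wildly noisy counts, while large epsilons return values close to the true count.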

While the Laplace distribution is common, other distributions such as the Gaussian can also be used. Laplace noise provides strict ε-differential privacy for low-sensitivity queries. Gaussian noise, in contrast, yields a relaxed guarantee known as (ε, δ)-differential privacy. Here epsilon bounds how much information can be learned about the data from the released output, while delta allows a small probability that the privacy guarantee is violated. The choice between Laplace, Gaussian, and other distributions depends on the specific requirements of the query and the dataset, and on the tradeoff between privacy and accuracy.

To illustrate the privacy-accuracy tradeoff in (\(\epsilon\), \(\delta\))-differential privacy, Figure 14.12 shows model accuracy for different noise levels on the MNIST dataset, a large dataset of handwritten digits (Abadi et al. 2016). The delta value (black line; right y-axis) denotes the level of privacy relaxation (a higher value means the privacy guarantee is less stringent). As the guarantee is relaxed, the accuracy of the model increases.

Figure 14.12: Privacy-accuracy tradeoff. Source: Abadi et al. (2016).
Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

The key points to remember about differential privacy are the following:

  • Adding Noise: The fundamental technique in differential privacy is adding controlled random noise to the data or query results. This noise masks the contribution of individual data points.

  • Balancing Act: There is a balance between privacy and accuracy. More noise (lower ϵ) means stronger privacy but less accuracy in the model’s results.

  • Universality: Differential privacy doesn’t rely on assumptions about what an attacker knows. This makes it robust against re-identification attacks, where an attacker tries to uncover individual data.

  • Applicability: It can be applied to various types of data and queries, making it a versatile tool for privacy-preserving data analysis.

Tradeoffs

As with any technique, differential privacy involves tradeoffs. Since our focus is ML systems, we concentrate on the key computational considerations and tradeoffs when implementing differential privacy in a machine learning system:

Noise generation: Implementing differential privacy introduces important computational overheads compared to standard machine learning. One major consideration is the need to securely generate random noise from distributions like Laplace or Gaussian that is added to query results and model outputs. High-quality cryptographic random number generation can be computationally expensive.

Sensitivity analysis: Another key requirement is rigorously tracking how sensitive the underlying algorithms are to the addition or removal of a single data point. This global sensitivity analysis is required to calibrate the noise levels properly. However, analyzing worst-case sensitivity can substantially increase computational complexity for complex model training procedures and data pipelines.

Privacy budget management: Managing the privacy loss budget across multiple queries and learning iterations is another bookkeeping overhead. The system must track cumulative privacy costs and compose them to bound the overall privacy guarantee. This adds a computational burden beyond simply running queries or training models.
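To make the bookkeeping concrete, here is a minimal privacy-budget ledger under basic sequential composition, where the total epsilon spent is simply the sum of the epsilons of the released queries. This is a sketch under that assumption; production systems typically use tighter composition accountants, and the epsilon values shown are illustrative.

```python
# A minimal privacy-budget ledger using basic sequential composition:
# total epsilon spent = sum of epsilons of all released queries.
class PrivacyBudget:
    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon):
        """Record the cost of a release; refuse it if the budget would be exceeded."""
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted; refuse the query")
        self.spent += epsilon

budget = PrivacyBudget(total_epsilon=1.0)
budget.charge(0.3)   # e.g., a noisy histogram release
budget.charge(0.5)   # e.g., a noisy mean
print("remaining epsilon:", budget.total - budget.spent)
```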

Batch vs. online tradeoffs: For online learning systems with continuous, high-volume queries, differentially private algorithms require new mechanisms to maintain utility and limit accumulated privacy loss, since each query consumes part of the privacy budget. Batch offline processing is simpler from a computational perspective, as data is processed in large batches and each batch can be treated as a single query. High-dimensional sparse data further complicates sensitivity analysis.

Distributed training: When training models using distributed or federated approaches, new cryptographic protocols are needed to track and bound privacy leakage across nodes. Secure multiparty computation over encrypted data for differential privacy adds substantial computational load.

While differential privacy provides strong formal privacy guarantees, implementing it rigorously requires additions and modifications to the machine learning pipeline at a computational cost. Managing these overheads while preserving model accuracy remains an active research area.

Case Study

Apple’s implementation of differential privacy in iOS and macOS provides a prominent real-world example of how differential privacy can be deployed at scale. Apple wanted to collect aggregated usage statistics across its ecosystem to improve products and services, but aimed to do so without compromising individual user privacy.

To achieve this, they implemented differential privacy techniques directly on user devices to anonymize data points before sending them to Apple servers. Specifically, Apple uses the Laplace mechanism to inject carefully calibrated random noise. For example, suppose a user’s location history contains [Work, Home, Work, Gym, Work, Home]. In that case, the differentially private version might replace the exact locations with a noisy sample like [Gym, Home, Work, Work, Home, Work].

Apple tunes the Laplace noise distribution to provide a high level of privacy while preserving the utility of aggregated statistics. Increasing noise levels provides stronger privacy guarantees (lower ε values in DP terminology) but can reduce data utility. Apple’s privacy engineers empirically optimized this tradeoff based on their product goals.

Apple obtains high-fidelity aggregated statistics by combining hundreds of millions of noisy data points from devices. For instance, it can analyze the adoption of new iOS app features while masking any individual user’s app behavior. On-device computation avoids sending raw data to Apple servers.

The system uses hardware-based secure random number generation to sample from the Laplace distribution on devices efficiently. Apple also had to optimize its differentially private algorithms and pipeline to operate under the computational constraints of consumer hardware.

Multiple third-party audits have verified that Apple’s system provides rigorous differential privacy protections in line with its stated policies. Of course, assumptions around composition over time and potential re-identification risks still apply. Apple’s deployment shows how differential privacy can be realized in large real-world products when backed by sufficient engineering resources.

Want to train an ML model without compromising anyone’s secrets? Differential Privacy is like a superpower for your data! In this Colab, we’ll use TensorFlow Privacy to add special noise during training. This makes it way harder for anyone to determine if a single person’s data was used, even if they have sneaky ways of peeking at the model.
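As a complement to the Colab, the core idea behind differentially private training (DP-SGD) can be sketched without any framework: clip each example’s gradient to a fixed norm, then add Gaussian noise before updating the weights. The sketch below is a framework-free NumPy illustration of that idea, not the TensorFlow Privacy API; the toy data, clipping norm, and noise multiplier are all assumptions chosen for exposition, and no formal epsilon is computed.

```python
# Conceptual DP-SGD sketch (logistic regression on toy data, NumPy only).
# Per-example gradients are clipped to an L2 norm bound, Gaussian noise is
# added to their sum, and the average noisy gradient drives the update.
import numpy as np

rng = np.random.default_rng(0)          # for a real system, use a secure RNG for the noise
X = rng.normal(size=(1000, 5))                                    # toy features
y = (X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000) > 0).astype(float)

w = np.zeros(5)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 100             # illustrative hyperparameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    idx = rng.choice(len(X), batch, replace=False)
    xb, yb = X[idx], y[idx]
    grads = (sigmoid(xb @ w) - yb)[:, None] * xb                  # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)            # clip each example
    noisy_sum = grads.sum(axis=0) + rng.normal(
        scale=noise_mult * clip_norm, size=w.shape)               # add calibrated noise
    w -= lr * noisy_sum / batch

print("train accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())
```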

14.8.2 Federated Learning

Core Idea

Federated Learning (FL) is a type of machine learning in which a model is trained across multiple devices or servers while keeping the training data localized. It was previously discussed in the Model Optimizations chapter; we briefly recap it here for completeness and focus on the aspects that pertain to privacy and security.

FL trains machine learning models across decentralized networks of devices or systems while keeping all training data localized. Figure 14.13 illustrates this process: each participating device leverages its local data to calculate model updates, which are then aggregated to build an improved global model. However, the raw training data is never directly shared, transferred, or compiled. This privacy-preserving approach allows for the joint development of ML models without centralizing the potentially sensitive training data in one place.

Figure 14.13: Federated Learning lifecycle. Source: Jin et al. (2020).
Jin, Yilun, Xiguang Wei, Yang Liu, and Qiang Yang. 2020. “Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective.” arXiv Preprint arXiv:2002.11545.

One of the most common model aggregation algorithms is Federated Averaging (FedAvg), in which the global model is created by averaging the parameters of the locally trained models, typically weighted by the amount of data on each client. While FedAvg works well with independent and identically distributed (IID) data, alternative algorithms like Federated Proximal (FedProx) are crucial in real-world applications where data is often non-IID. FedProx is designed for FL settings with significant heterogeneity in client updates due to diverse data distributions, computational capabilities, or varied amounts of data across devices.
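To make the aggregation step concrete, here is a minimal sketch of FedAvg’s server-side weighted averaging. Client-side training is abstracted away, and the client parameter vectors and dataset sizes are made up for illustration.

```python
# Minimal FedAvg aggregation sketch: the server averages client parameters,
# weighted by each client's number of training examples.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: list of flat parameter vectors; client_sizes: list of n_k."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)                 # (num_clients, num_params)
    coeffs = np.array(client_sizes, dtype=float) / total
    return coeffs @ stacked                            # weighted average of parameters

# Three clients with locally trained parameters and different dataset sizes.
updates = [np.array([0.9, 1.2]), np.array([1.1, 0.8]), np.array([1.0, 1.0])]
sizes = [100, 300, 600]
print(fed_avg(updates, sizes))                         # new global model parameters
```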

By leaving the raw data distributed and exchanging only temporary model updates, federated learning provides a more secure and privacy-enhancing alternative to traditional centralized machine learning pipelines. This allows organizations and users to benefit collaboratively from shared models while maintaining control and ownership over sensitive data. The decentralized nature of FL also makes it robust to single points of failure.

Imagine a group of hospitals that want to collaborate on a study to predict patient outcomes based on their symptoms. However, they cannot share their patient data due to privacy concerns and regulations like HIPAA. Here’s how Federated Learning can help.

  • Local Training: Each hospital trains a machine learning model on its own patient data. This training happens locally, meaning the data never leaves the hospital’s servers.

  • Model Sharing: After training, each hospital sends only the model (specifically, its parameters or weights) to a central server. It does not send any patient data.

  • Aggregating Models: The central server aggregates these models from all hospitals into a single, more robust model. This process typically involves averaging the model parameters.

  • Benefit: The result is a machine learning model that has learned from a wide range of patient data without sharing sensitive data or removing it from its original location.

Tradeoffs

There are several system performance-related aspects of FL in machine learning systems. It is important to understand these tradeoffs because there is no “free lunch” when preserving privacy through FL (Li et al. 2020).

Li, Tian, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. “Federated Learning: Challenges, Methods, and Future Directions.” IEEE Signal Process Mag. 37 (3): 50–60. https://doi.org/10.1109/msp.2020.2975749.

Communication Overhead and Network Constraints: In FL, one of the most significant challenges is managing the communication overhead. This involves the frequent transmission of model updates between a central server and numerous client devices, which can be bandwidth-intensive. The total number of communication rounds and the size of transmitted messages per round need to be reduced to minimize communication further. This can lead to substantial network traffic, especially in scenarios with many participants. Additionally, latency becomes a critical factor — the time taken for these updates to be sent, aggregated, and redistributed can introduce delays. This affects the overall training time and impacts the system’s responsiveness and real-time capabilities. Managing this communication while minimizing bandwidth usage and latency is crucial for implementing FL.

Computational Load on Local Devices: FL relies on client devices (like smartphones or IoT devices, which especially matter in TinyML) for model training, which often have limited computational power and battery life. Running complex machine learning algorithms locally can strain these resources, leading to potential performance issues. Moreover, the capabilities of these devices can vary significantly, resulting in uneven contributions to the model training process. Some devices process updates faster and more efficiently than others, leading to disparities in the learning process. Balancing the computational load to ensure consistent participation and efficiency across all devices is a key challenge in FL.

Model Training Efficiency: FL’s decentralized nature can impact model training’s efficiency. Achieving convergence, where the model no longer significantly improves, can be slower in FL than in centralized training methods. This is particularly true in cases where the data is non-IID (non-independent and identically distributed) across devices. Additionally, the algorithms used for aggregating model updates play a critical role in the training process. Their efficiency directly affects the speed and effectiveness of learning. Developing and implementing algorithms that can handle the complexities of FL while ensuring timely convergence is essential for the system’s performance.

Scalability Challenges: Scalability is a significant concern in FL, especially as the number of participating devices increases. Managing and coordinating model updates from many devices adds complexity and can strain the system. Ensuring that the system architecture can efficiently handle this increased load without degrading performance is crucial. This involves not just handling the computational and communication aspects but also maintaining the quality and consistency of the model as the scale of the operation grows. A key challenge is designing FL systems that scale effectively while maintaining performance.

Data Synchronization and Consistency: Ensuring data synchronization and maintaining model consistency across all participating devices in FL is challenging. Keeping all devices synchronized with the latest model version can be difficult in environments with intermittent connectivity or devices that go offline periodically. Furthermore, maintaining consistency in the learned model, especially when dealing with a wide range of devices with different data distributions and update frequencies, is crucial. This requires sophisticated synchronization and aggregation strategies to ensure that the final model accurately reflects the learnings from all devices.

Energy Consumption: The energy consumption of client devices in FL is a critical factor, particularly for battery-powered devices like smartphones and other TinyML/IoT devices. The computational demands of training models locally can lead to significant battery drain, which might discourage continuous participation in the FL process. Balancing the computational requirements of model training with energy efficiency is essential. This involves optimizing algorithms and training processes to reduce energy consumption while achieving effective learning outcomes. Ensuring energy-efficient operation is key to user acceptance and the sustainability of FL systems.

Case Studies

Here are a couple of real-world case studies that can illustrate the use of federated learning:

Google Gboard

Google uses federated learning to improve predictions on its Gboard mobile keyboard app. The app runs a federated learning algorithm on users’ devices to learn from their local usage patterns and text predictions while keeping user data private. The model updates are aggregated in the cloud to produce an enhanced global model. This provides next-word predictions personalized to each user’s typing style while avoiding directly collecting sensitive typing data. Google reported that the federated learning approach reduced prediction errors by 25% compared to the baseline while preserving privacy.

Healthcare Research

The UK Biobank and American College of Cardiology combined datasets to train a model for heart arrhythmia detection using federated learning. The datasets could not be combined directly due to legal and privacy restrictions. Federated learning allowed collaborative model development without sharing protected health data; only model updates were exchanged between the parties. This improved model accuracy because the model could leverage a wider diversity of training data while meeting regulatory requirements.

Financial Services

Banks are exploring federated learning for anti-money laundering (AML) detection models. Multiple banks could jointly improve AML models without sharing confidential customer transaction data with competitors or third parties. Only the model updates need to be aggregated rather than raw transaction data. This allows access to richer training data from diverse sources while avoiding regulatory and confidentiality issues around sharing sensitive financial customer data.

These examples demonstrate how federated learning provides tangible privacy benefits and enables collaborative ML in settings where direct data sharing is impossible.

14.8.3 Machine Unlearning

Core Idea

Machine unlearning is a relatively new technique for removing the influence of a subset of training data from a trained model. A baseline approach might consist of simply fine-tuning the model for more epochs on the data that should be retained, thereby diluting the influence of the data to be “forgotten.” Since this approach does not explicitly remove the influence of the erased data, membership inference attacks remain possible, so researchers have developed approaches that unlearn data explicitly. One family of approaches adjusts the model’s loss function to treat the losses of the “forget set” (data to be unlearned) and the “retain set” (remaining data that should still be remembered) differently (Tarun et al. 2022; Khan and Swaroop 2021). Figure 14.14 illustrates some applications of machine unlearning.

Tarun, Ayush K, Vikram S Chundawat, Murari Mandal, and Mohan Kankanhalli. 2022. “Deep Regression Unlearning.” ArXiv Preprint abs/2210.08196. https://arxiv.org/abs/2210.08196.
Khan, Mohammad Emtiyaz, and Siddharth Swaroop. 2021. “Knowledge-Adaptation Priors.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 19757–70. https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html.
Figure 14.14: Applications of Machine Unlearning. Source: BBVA OpenMind
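To illustrate the forget-set/retain-set idea described above, the sketch below shows one schematic unlearning objective: gradient descent on the retain set combined with gradient ascent on the forget set. This is a simplified illustration under that assumption, not the specific method of the cited papers, and the toy model, batches, and weighting factor are invented for exposition.

```python
# Schematic unlearning update (PyTorch): minimize the retain-set loss while
# maximizing the forget-set loss, so the model keeps its general behavior but
# loses the influence of the data it should forget.
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, retain_batch, forget_batch, alpha=1.0):
    xr, yr = retain_batch
    xf, yf = forget_batch
    retain_loss = F.cross_entropy(model(xr), yr)
    forget_loss = F.cross_entropy(model(xf), yf)
    loss = retain_loss - alpha * forget_loss   # gradient ascent on the forget set
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return retain_loss.item(), forget_loss.item()

# Toy wiring: a small classifier and random stand-in batches.
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
retain = (torch.randn(32, 10), torch.randint(0, 2, (32,)))
forget = (torch.randn(8, 10), torch.randint(0, 2, (8,)))
print(unlearning_step(model, opt, retain, forget))
```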

Case Study

Researchers have demonstrated a real-life example of machine unlearning applied to a state-of-the-art model by training an LLM, LLaMA2-7b, to unlearn any references to Harry Potter (Eldan and Russinovich 2023). Although this model took 184K GPU hours to pre-train, only 1 GPU hour of fine-tuning was needed to erase the model’s ability to generate or recall Harry Potter-related content, without noticeably compromising its accuracy on unrelated content. Figure 14.15 shows how the model output changes before (Llama-7b-chat-hf column) and after (Finetuned Llama-7b column) unlearning.

Figure 14.15: Llama unlearning Harry Potter. Source: Eldan and Russinovich (2023).
Eldan, Ronen, and Mark Russinovich. 2023. “Who’s Harry Potter? Approximate Unlearning in LLMs.” ArXiv Preprint abs/2310.02238. https://arxiv.org/abs/2310.02238.

Other Uses

Removing adversarial data

Deep learning models are vulnerable to adversarial attacks, in which an attacker generates adversarial data so similar to legitimate training data that a human cannot tell the difference between the real and fabricated examples. Adversarial data causes the model to output incorrect predictions, which can have detrimental consequences in applications such as healthcare diagnosis. Machine unlearning has been used to remove the influence of adversarial data and prevent these incorrect predictions from occurring and causing harm.

14.8.4 Homomorphic Encryption

Core Idea

Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, generating an encrypted result that, when decrypted, matches the result of the same operations performed on the plaintext. For example, multiplying two numbers encrypted with homomorphic encryption produces an encrypted product that decrypts to the actual product of the two numbers. This means data can be processed in encrypted form, and only the final output needs to be decrypted, significantly enhancing data security, especially for sensitive information.

Homomorphic encryption enables outsourced computation on encrypted data without exposing the data itself to the external party performing the operations. However, partially homomorphic schemes support only certain computations, such as addition or multiplication. Fully homomorphic encryption (FHE), which can handle arbitrary computations, is even more complex, and in practice the number of operations that can be performed is limited before accumulated noise corrupts the ciphertext.

To use homomorphic encryption across different entities, carefully generated public keys must be exchanged for operations across separately encrypted data. This advanced encryption technique enables previously impossible secure computation paradigms but requires expertise to implement correctly for real-world systems.

Benefits

Homomorphic encryption enables machine learning model training and inference on encrypted data, ensuring that sensitive inputs and intermediate values remain confidential. This is critical in healthcare, finance, genetics, and other domains that increasingly rely on ML to analyze sensitive and regulated datasets containing billions of personal records.

Homomorphic encryption thwarts attacks like model extraction and membership inference that could expose private data used in ML workflows. It provides an alternative to TEEs using hardware enclaves for confidential computing. However, current schemes have high computational overheads and algorithmic limitations that constrain real-world applications.

Homomorphic encryption realizes the decades-old vision of secure computation by allowing computation directly on ciphertexts. The idea was conceptualized in the 1970s, but the first fully homomorphic cryptosystem only emerged in 2009, enabling arbitrary computations on encrypted data. Ongoing research is making these techniques more efficient and practical.

Homomorphic encryption shows great promise for enabling privacy-preserving machine learning under emerging data regulations. However, given its constraints, one should carefully evaluate its applicability against other confidential computing approaches. Extensive resources exist for exploring homomorphic encryption and tracking progress in easing adoption barriers.

Mechanics

  1. Data Encryption: Before data is processed or sent to an ML model, it is encrypted using a homomorphic encryption scheme and public key. For example, encrypting numbers \(x\) and \(y\) generates ciphertexts \(E(x)\) and \(E(y)\).

  2. Computation on Ciphertext: The ML algorithm processes the encrypted data directly. For instance, multiplying the ciphertexts \(E(x)\) and \(E(y)\) generates \(E(xy)\). More complex model training can also be done on ciphertexts.

  3. Result Decryption: The result \(E(xy)\) remains encrypted and can only be decrypted by someone holding the corresponding private key, revealing the actual product \(xy\).

Only authorized parties with the private key can decrypt the final outputs, protecting the intermediate state. However, noise accumulates with each operation, preventing further computation without decryption.

Beyond healthcare, homomorphic encryption enables confidential computing for applications like financial fraud detection, insurance analytics, and genetics research. It offers an alternative to techniques like multiparty computation and TEEs. Ongoing research continues to improve its efficiency and capabilities.

Tools like HElib, SEAL, and TensorFlow HE provide libraries for exploring and implementing homomorphic encryption in real-world machine learning pipelines.
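Independent of any particular library, the core “compute on ciphertexts” property can be illustrated with a toy Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts decrypts to the sum of the plaintexts. The parameters below are deliberately tiny and insecure, chosen only to show the mechanics; real deployments should rely on vetted libraries and proper key sizes.

```python
# Toy Paillier cryptosystem (additively homomorphic). For exposition only:
# the primes are far too small to be secure.
import math
import random

p, q = 61, 53                         # toy primes (never use in practice)
n, n2, g = p * q, (p * q) ** 2, p * q + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2                # homomorphic addition on ciphertexts
print(decrypt(c_sum))                 # 42, computed without decrypting c1 or c2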

Tradeoffs

For many real-time and embedded applications, fully homomorphic encryption remains impractical for the following reasons.

Computational Overhead: Homomorphic encryption imposes very high computational overheads, often resulting in slowdowns of over 100x for real-world ML applications. This makes it impractical for many time-sensitive or resource-constrained uses. Optimized hardware and parallelization can alleviate but not eliminate this issue.

Complexity of Implementation: The sophisticated algorithms require deep cryptographic expertise to implement correctly. Nuances like format compatibility with floating-point ML models and scalable key management pose hurdles. This complexity hinders widespread practical adoption.

Algorithmic Limitations: Current schemes restrict the functions and depth of computations supported, limiting the models and data volumes that can be processed. Ongoing research is pushing these boundaries, but restrictions remain.

Hardware Acceleration: Achieving acceptable performance often requires specialized hardware accelerators or co-processors, which adds design and infrastructure costs.

Hybrid Designs: Rather than encrypting entire workflows, selective application of homomorphic encryption to critical subcomponents can achieve protection while minimizing overheads.

Ready to unlock the power of encrypted computation? Homomorphic encryption is like a magic trick for your data! In this Colab, we’ll learn how to do calculations on secret numbers without ever revealing them. Imagine training a model on data you can’t even see – that’s the power of this mind-bending technology.

14.8.5 Secure Multi-Party Computation

Core Idea

The overarching goal of secure multi-party computation (MPC) is to enable different parties to jointly compute a function over their inputs while keeping those inputs private. For example, two organizations may want to collaborate on training a machine learning model by combining their respective datasets, but they cannot directly reveal that data due to privacy or confidentiality constraints. MPC provides protocols and techniques that allow them to achieve the benefits of pooled data for model accuracy without compromising the privacy of each organization’s sensitive data.

At a high level, MPC works by carefully splitting the computation into parts that each party can execute independently using its private input. The results are then combined so that only the final output of the function is revealed and nothing about the intermediate values. Cryptographic techniques are used to provably guarantee that the partial results remain private.

Let’s take a simple example of an MPC protocol. One of the most basic MPC protocols is the secure addition of two numbers. Each party splits its input into random shares that are secretly distributed. They exchange the shares and locally compute the sum of the shares, which reconstructs the final sum without revealing the individual inputs. For example, if Alice has input x and Bob has input y:

  1. Alice generates random \(x_1\) and sets \(x_2 = x - x_1\)

  2. Bob generates random \(y_1\) and sets \(y_2 = y - y_1\)

  3. Alice sends \(x_1\) to Bob, Bob sends \(y_1\) to Alice (keeping \(x_2\) and \(y_2\) secret)

  4. Alice computes \(x_2 + y_1 = s_1\), Bob computes \(x_1 + y_2 = s_2\)

  5. \(s_1 + s_2 = x + y\) is the final sum, without revealing \(x\) or \(y\).

Alice’s and Bob’s individual inputs (\(x\) and \(y\)) remain private: each party reveals only a uniformly random share of its input, so the exchanged values disclose no information about the original numbers.
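The same idea generalizes to more than two parties. The sketch below shows additive secret sharing over a prime field for computing a sum; it assumes honest-but-curious parties and omits the actual communication layer, and the party names and values are purely illustrative.

```python
# Minimal additive-secret-sharing sketch: each party splits its input into
# random shares modulo a prime, parties locally sum the shares they hold,
# and only the reconstructed total is revealed.
import random

P = 2**61 - 1                                  # large prime modulus

def share(value, num_parties):
    """Split value into num_parties additive shares modulo P."""
    shares = [random.randrange(P) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = {"alice": 7, "bob": 12, "carol": 23}  # private inputs (illustrative)
num = len(inputs)

# Each party produces one share for every party (distribution is abstracted away).
all_shares = {name: share(v, num) for name, v in inputs.items()}

# Party i locally sums the i-th share received from everyone.
partial_sums = [sum(all_shares[name][i] for name in inputs) % P for i in range(num)]

# Only the partial sums are published; their sum reveals the total and nothing else.
print(sum(partial_sums) % P)                   # 42
```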

Secure Comparison: Another basic operation is a secure comparison of two numbers, determining which is greater than the other. This can be done using techniques like Yao’s Garbled Circuits, where the comparison circuit is encrypted to allow joint evaluation of the inputs without leaking them.

Secure Matrix Multiplication: Matrix operations like multiplication are essential for machine learning. MPC techniques like additive secret sharing can be used to split matrices into random shares, compute products on the shares, and then reconstruct the result.

Secure Model Training: Distributed machine learning training algorithms like federated averaging can be made secure using MPC. Model updates computed on partitioned data at each node are secretly shared between nodes and aggregated to train the global model without exposing individual updates.

The core idea behind MPC protocols is to divide the computation into steps that can be executed jointly without revealing intermediate sensitive data. This is accomplished by combining cryptographic techniques like secret sharing, homomorphic encryption, oblivious transfer, and garbled circuits. MPC protocols enable the collaborative computation of sensitive data while providing provable privacy guarantees. This privacy-preserving capability is essential for many machine learning applications today involving multiple parties that cannot directly share their raw data.

The main approaches used in MPC include:

  • Homomorphic encryption: Special encryption allows computations to be carried out on encrypted data without decrypting it.

  • Secret sharing: The private data is divided into random shares distributed to each party. Computations are done locally on the shares and finally reconstructed.

  • Oblivious transfer: A protocol where a receiver obtains a subset of data from a sender, but the sender does not know which specific data was transferred.

  • Garbled circuits: The function to be computed is represented as a Boolean circuit that is encrypted (“garbled”) to allow joint evaluation without revealing inputs.

Tradeoffs

While MPC protocols provide strong privacy guarantees, they come at a high computational cost compared to plain computations. Every secure operation, like addition, multiplication, or comparison, requires orders of magnitude more processing than the equivalent unencrypted operation. This overhead stems from the underlying cryptographic techniques:

  • In partially homomorphic encryption, each computation on ciphertexts requires costly public-key operations. Fully homomorphic encryption has even higher overheads.

  • Secret sharing divides data into multiple shares, so even basic operations require manipulating many shares.

  • Oblivious transfer and garbled circuits add masking and encryption to hide data access patterns and execution flows.

  • MPC systems require extensive communication and interaction between parties to compute on shares/ciphertexts jointly.

As a result, MPC protocols can slow down computations by 3-4 orders of magnitude compared to plain implementations. This becomes prohibitively expensive for large datasets and models. Therefore, training machine learning models on encrypted data using MPC remains infeasible today for realistic dataset sizes due to the overhead. Clever optimizations and approximations are needed to make MPC practical.

Ongoing MPC research closes this efficiency gap through cryptographic advances, new algorithms, trusted hardware like SGX enclaves, and leveraging accelerators like GPUs/TPUs. However, in the foreseeable future, some degree of approximation and performance tradeoff is needed to scale MPC to meet the demands of real-world machine learning systems.

14.8.6 Synthetic Data Generation

Core Idea

Synthetic data generation has emerged as an important privacy-preserving machine learning approach that allows models to be developed and tested without exposing real user data. The key idea is to train generative models on real-world datasets and then sample from these models to synthesize artificial data that statistically matches the original data distribution but does not contain actual user information. For example, a GAN could be trained on a dataset of sensitive medical records to learn the underlying patterns and then used to sample synthetic patient data.

The primary challenge of synthesizing data is ensuring that adversaries cannot re-identify records from the original dataset. Simply adding noise to the original dataset still risks privacy leakage; in the context of differential privacy, the amount and distribution of noise are instead calibrated to the data’s sensitivity using mathematically rigorous mechanisms, which yields a quantifiable privacy guarantee. Beyond preserving privacy, synthetic data also addresses data availability issues such as imbalanced or scarce datasets and can support tasks like anomaly detection.

Researchers can freely share this synthetic data and collaborate on modeling without revealing private medical information. Well-constructed synthetic data protects privacy while providing utility for developing accurate models. Key techniques to prevent reconstruction of the original data include adding differential privacy noise during training, enforcing plausibility constraints, and using multiple diverse generative models. Here are some common approaches for generating synthetic data:

  • Generative Adversarial Networks (GANs): GANs are a class of generative models in which two neural networks compete against each other in a game (a minimal training sketch follows this list). Figure 14.16 gives an overview of the GAN system. The generator network (big red box) is responsible for producing the synthetic data, and the discriminator network (yellow box) evaluates the authenticity of the data by distinguishing between fake data created by the generator and the real data. Both networks update their parameters based on the outcome, with the discriminator acting as a measure of how similar the fake and real data are. GANs are highly effective at generating realistic data and are a popular approach for synthetic data generation.
Figure 14.16: Flowchart of GANs. Source: Rosa and Papa (2021).
Rosa, Gustavo H. de, and João P. Papa. 2021. “A Survey on Text Generation Using Generative Adversarial Networks.” Pattern Recogn. 119 (November): 108098. https://doi.org/10.1016/j.patcog.2021.108098.
  • Variational Autoencoders (VAEs): VAEs are neural networks capable of learning complex probability distributions while balancing data generation quality and computational efficiency. They encode data into a latent space, learn the distribution of that space, and decode samples from it back into data.

  • Data Augmentation: This involves transforming existing data to create new, altered data. For example, flipping, rotating, and scaling (uniformly or non-uniformly) original images can help create a more diverse, robust image dataset before training an ML model.

  • Simulations: Mathematical models can simulate real-world systems or processes to mimic real-world phenomena. This is highly useful in scientific research, urban planning, and economics.
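As referenced in the GANs item above, the following is a minimal GAN training sketch in PyTorch on toy one-dimensional data. It is illustrative only: the "real" data is a synthetic Gaussian standing in for a sensitive dataset, and it omits the differential privacy noise, plausibility constraints, and evaluation that a production synthetic-data pipeline would need.

```python
# Minimal GAN sketch on toy 1-D data (PyTorch). Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):                       # stand-in for a private dataset: N(4, 1.25)
    return 4.0 + 1.25 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample "synthetic" data that mimics the real distribution without copying records.
synthetic = G(torch.randn(1000, 8)).detach()
print(float(synthetic.mean()), float(synthetic.std()))
```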

Benefits

Synthetic data may be adopted because of privacy or compliance requirements, but it is also widely used when available data is of poor quality, scarce, or inaccessible. Synthetic data enables more efficient and effective development by streamlining robust model training, testing, and deployment. It allows researchers to share models more widely without breaching privacy laws and regulations, and it facilitates collaboration among users of the same dataset, helping to broaden capabilities and advance ML research.

There are several motivations for using synthetic data in machine learning:

  • Privacy and compliance: Synthetic data avoids exposing personal information, allowing more open sharing and collaboration. This is important when working with sensitive datasets like healthcare records or financial information.

  • Data scarcity: When insufficient real-world data is available, synthetic data can augment training datasets. This improves model accuracy when limited data is a bottleneck.

  • Model testing: Synthetic data provides privacy-safe sandboxes for testing model performance, debugging issues, and monitoring for bias.

  • Data labeling: High-quality labeled training data is often scarce and expensive. Synthetic data can help auto-generate labeled examples.

Tradeoffs

While synthetic data tries to remove any evidence of the original dataset, privacy leakage is still a risk since the synthetic data mimics the original data. The statistical information and distribution are similar, if not the same, between the original and synthetic data. By resampling from the distribution, adversaries may still be able to recover the original training samples. Due to their inherent learning processes and complexities, neural networks might accidentally reveal sensitive information about the original training data.

A core challenge with synthetic data is the potential gap between synthetic and real-world data distributions. Despite advancements in generative modeling, synthetic data may only partially capture real data’s complexity, diversity, and nuanced patterns. This can limit the utility of synthetic data for robustly training machine learning models. Rigorously evaluating synthetic data quality through adversarial methods and comparing model performance against real-data benchmarks helps assess and improve fidelity. Inherently, however, synthetic data remains an approximation.

Another critical concern is the privacy risk of synthetic data. Generative models may leak identifiable information about individuals in the training data, which could enable reconstruction of private information. Emerging adversarial attacks demonstrate the difficulty of preventing identity leakage from synthetic data generation pipelines. Techniques like differential privacy can help safeguard privacy but come with tradeoffs in data utility. There is an inherent tension between producing useful synthetic data and fully protecting sensitive training data, which must be balanced.

Additional pitfalls of synthetic data include amplified biases, mislabeling, the computational overhead of training generative models, storage costs, and failure to account for out-of-distribution novel data. While these are secondary to the core synthetic-real gap and privacy risks, they remain important considerations when evaluating the suitability of synthetic data for particular machine-learning tasks. As with any technique, the advantages of synthetic data come with inherent tradeoffs and limitations that require thoughtful mitigation strategies.

14.8.7 Summary

While all the techniques we have discussed aim to enable privacy-preserving machine learning, they involve distinct mechanisms and tradeoffs. Factors like computational constraints, required trust assumptions, threat models, and data characteristics guide the selection for a particular use case. However, finding the right balance between privacy, accuracy, and efficiency often requires experimentation and empirical evaluation. Table 14.2 compares the key privacy-preserving machine learning techniques and their pros and cons:

Table 14.2: Comparing techniques for privacy-preserving machine learning.
Differential Privacy
  Pros:
  • Strong formal privacy guarantees
  • Robust to auxiliary data attacks
  • Versatile for many data types and analyses
  Cons:
  • Accuracy loss from noise addition
  • Computational overhead for sensitivity analysis and noise generation

Federated Learning
  Pros:
  • Allows collaborative learning without sharing raw data
  • Data remains decentralized, improving security
  • No need for encrypted computation
  Cons:
  • Increased communication overhead
  • Potentially slower model convergence
  • Uneven client device capabilities

Secure Multi-Party Computation
  Pros:
  • Enables joint computation on sensitive data
  • Provides cryptographic privacy guarantees
  • Flexible protocols for various functions
  Cons:
  • Very high computational overhead
  • Complexity of implementation
  • Algorithmic constraints on function depth

Homomorphic Encryption
  Pros:
  • Allows computation on encrypted data
  • Prevents intermediate state exposure
  Cons:
  • Extremely high computational cost
  • Complex cryptographic implementations
  • Restrictions on function types

Synthetic Data Generation
  Pros:
  • Enables data sharing without leakage
  • Mitigates data scarcity problems
  Cons:
  • Synthetic-real gap in distributions
  • Potential for reconstructing private data
  • Biases and labeling challenges

14.9 Conclusion

Machine learning hardware security is critical as embedded ML systems are increasingly deployed in safety-critical domains like medical devices, industrial controls, and autonomous vehicles. We have explored various threats spanning hardware bugs, physical attacks, side channels, supply chain risks, etc. Defenses like TEEs, Secure Boot, PUFs, and hardware security modules provide multilayer protection tailored for resource-constrained embedded devices.

However, continual vigilance is essential to track emerging attack vectors and address potential vulnerabilities through secure engineering practices across the hardware lifecycle. As ML and embedded ML spread, maintaining rigorous security foundations that match the field’s accelerating pace of innovation remains imperative.

14.10 Resources

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to improve their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

  • Coming soon.