EarlyExodus: Leveraging early exits to mitigate backdoor vulnerability in deep learning
The rapid migration of artificial-intelligence workloads toward edge computing significantly enhances capabilities in critical applications such as autonomous vehicles, augmented and virtual reality, and e-health, but it also heightens the urgency for robust security. However, this urgency reveals a critical gap: state-of-the-art backdoor defenses remain vulnerable to sophisticated data-poisoning attacks that subtly embed malicious triggers into training data and covertly manipulate model predictions, threatening the reliability and trustworthiness of edge-deployed AI.
Main Authors: Salmane Douch; M. Riduan Abid; Khalid Zine-Dine; Driss Bouzidi; Fatima Ezzahra El Aidos; Driss Benhaddou
Format: Article
Language: English
Published: Elsevier, 2025-09-01
Series: Results in Engineering
Subjects: Data poisoning attacks; Backdoor attacks; Early exits neural networks; EarlyExodus
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123025024740
author | Salmane Douch; M. Riduan Abid; Khalid Zine-Dine; Driss Bouzidi; Fatima Ezzahra El Aidos; Driss Benhaddou
author_sort | Salmane Douch
collection | DOAJ |
description | The rapid migration of artificial-intelligence workloads toward edge computing significantly enhances capabilities in critical applications such as autonomous vehicles, augmented and virtual reality, and e-health, but it also heightens the urgency for robust security. However, this urgency reveals a critical gap: state-of-the-art backdoor defenses remain vulnerable to sophisticated data-poisoning attacks that subtly embed malicious triggers into training data and covertly manipulate model predictions, threatening the reliability and trustworthiness of edge-deployed AI. To counter this threat, we propose a defense mechanism that neutralizes advanced data-poisoning attacks, clearly identifies maliciously targeted labels, and preserves model accuracy and integrity across diverse architectures and datasets. Our technique, EarlyExodus, integrates early-exit branches within neural networks and trains them with a divergence objective so that, for poisoned inputs, the early exit exposes the malicious label while the final exit maintains the correct classification. In extensive experiments on LeNet-5, ResNet-32, and GhostNet across MNIST, CIFAR-10, and GTSRB, EarlyExodus reduces the average attack success rate of seven recent backdoor attacks to about 3%, while keeping clean-data accuracy degradation below 2%. These results demonstrate a practical, architecture-agnostic pathway toward trustworthy edge-AI systems and lay the foundation for extending backdoor defenses beyond image models to broader application domains.
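The abstract describes the mechanism but not the training details. As a rough, hedged illustration only, the PyTorch sketch below wires an early-exit branch into a small CNN and combines cross-entropy on both exits with a KL-divergence term between their output distributions. The class names, layer sizes, the KL form of the divergence objective, and the weight `lam` are all assumptions made here for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch of an early-exit network trained with a divergence
# objective, in the spirit of the EarlyExodus description above. Layer
# sizes, the KL-based divergence term, and the loss weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Small CNN with one early-exit branch and a final exit."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Early-exit branch attached after the shallow stem.
        self.early_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )
        self.trunk = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.final_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        h = self.stem(x)
        early_logits = self.early_head(h)            # shallow prediction
        final_logits = self.final_head(self.trunk(h))  # deep prediction
        return early_logits, final_logits


def divergence_loss(early_logits, final_logits, targets, lam: float = 0.1):
    """Cross-entropy on both exits, plus a term rewarding divergence
    between the two exit distributions (assumed KL form; the sign and
    weight here are illustrative, not the paper's objective)."""
    ce = (F.cross_entropy(final_logits, targets)
          + F.cross_entropy(early_logits, targets))
    kl = F.kl_div(F.log_softmax(early_logits, dim=1),
                  F.softmax(final_logits, dim=1),
                  reduction="batchmean")
    # Pushing the exits apart lets a backdoor trigger surface at the
    # early exit while the final exit keeps the correct classification.
    return ce - lam * kl


@torch.no_grad()
def flag_poisoned(model: EarlyExitNet, x: torch.Tensor):
    """Flag inputs where the exits disagree; the early exit's prediction
    would then reveal the attacker's target label."""
    early, final = model(x)
    disagree = early.argmax(1) != final.argmax(1)
    return disagree, early.argmax(1)


# Smoke test on random CIFAR-10-shaped data.
model = EarlyExitNet(num_classes=10)
x = torch.randn(8, 3, 32, 32)
loss = divergence_loss(*model(x), targets=torch.randint(0, 10, (8,)))
```

At inference time, a disagreement between the two exits would flag a likely poisoned input, with the early exit exposing the targeted label, which matches the detection behavior the abstract describes.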
format | Article |
id | doaj-art-5d0d70b9c23a4c36b72a3ebaa04b55d0 |
institution | Matheson Library |
issn | 2590-1230 |
language | English |
publishDate | 2025-09-01 |
publisher | Elsevier |
record_format | Article |
series | Results in Engineering |
spelling | Results in Engineering, vol. 27, art. no. 106404, Elsevier, 2025-09-01 (ISSN 2590-1230; record indexed 2025-07-31). Author affiliations: Salmane Douch, National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University in Rabat, Rabat, Morocco (corresponding author); M. Riduan Abid, TSYS School of Computer Science, Columbus State University, Columbus, GA, USA; Khalid Zine-Dine, Faculty of Sciences (FSR), Mohammed V University in Rabat, Rabat, Morocco; Driss Bouzidi, National School of Computer Science and Systems Analysis (ENSIAS), Mohammed V University in Rabat, Rabat, Morocco; Fatima Ezzahra El Aidos, College of Natural Sciences and Mathematics, University of Houston, Houston, USA; Driss Benhaddou, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia. Title, abstract, subjects, and URL as above.
title | EarlyExodus: Leveraging early exits to mitigate backdoor vulnerability in deep learning |
topic | Data poisoning attacks; Backdoor attacks; Early exits neural networks; EarlyExodus
url | http://www.sciencedirect.com/science/article/pii/S2590123025024740 |