Layer‐Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy

Bibliographic Details
Main Authors: Zhang Xiangfei, Zhang Qingchen, Jiang Liming
Format: Article
Language: English
Published: Wiley 2025-06-01
Series: CAAI Transactions on Intelligence Technology
Subjects:
Online Access: https://doi.org/10.1049/cit2.70008
Description
Summary: ABSTRACT Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable privacy safeguards during training. However, classic differentially private learning injects the same level of noise into the gradients across all training iterations, which degrades the trade-off between model utility and privacy guarantees. To address this issue, this paper proposes an adaptive differential privacy mechanism that dynamically adjusts the privacy budget at the layer level as training progresses in order to resist membership inference attacks. Specifically, an equal privacy budget is initially allocated to each layer. As training advances, the privacy budget for layers closer to the output is reduced (adding more noise), while the budget for layers closer to the input is increased, with the adjustment magnitude determined automatically from the iteration count. This dynamic allocation provides a simple procedure for adjusting privacy budgets, relieving users of manual parameter tuning and keeping the privacy-preservation strategy aligned with training progress. Extensive experiments on five well-known datasets indicate that the proposed method outperforms competing methods in terms of accuracy and resilience against membership inference attacks.
ISSN: 2468-2322
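
The abstract describes the mechanism only at a high level, so the following is a minimal illustrative sketch, not the authors' published algorithm. It assumes a simple linear schedule for shifting a fixed per-iteration budget from output-side layers toward input-side layers as training progresses, and uses the standard Gaussian mechanism with per-layer gradient clipping; the function names, the `alpha` shift parameter, and the schedule itself are hypothetical choices for illustration.

```python
import numpy as np


def layer_budgets(eps_total, n_layers, t, T, alpha=0.5):
    """Split a total per-iteration privacy budget across layers.

    Hypothetical schedule: start from an equal split, then shift budget
    linearly with training progress t/T away from output-side layers
    (smaller budget -> more noise) toward input-side layers. `alpha`
    caps the fraction of a layer's base budget that can be shifted.
    """
    base = np.full(n_layers, eps_total / n_layers)
    progress = t / T                        # 0 at the start, 1 at the end
    # Signed weights: +1 for the layer nearest the input, -1 nearest the output.
    w = np.linspace(1.0, -1.0, n_layers)
    eps = base + alpha * progress * base * w
    return eps * (eps_total / eps.sum())    # renormalise so the sum stays eps_total


def noisy_layer_gradients(grads, eps, clip=1.0, delta=1e-5):
    """Clip each layer's gradient and add Gaussian noise sized by its budget."""
    noisy = []
    for g, eps_l in zip(grads, eps):
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip / (norm + 1e-12))                      # per-layer clipping
        sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps_l   # Gaussian mechanism
        noisy.append(g + np.random.normal(0.0, sigma, g.shape))
    return noisy


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.normal(size=(4, 4)) for _ in range(5)]  # five dummy layer gradients
    eps = layer_budgets(eps_total=1.0, n_layers=5, t=800, T=1000)
    print("per-layer budgets:", np.round(eps, 3))
    _ = noisy_layer_gradients(grads, eps)
```

Late in training the sketch assigns the largest budget (least noise) to the input-side layer and the smallest to the output-side layer, matching the allocation direction described in the abstract; the exact adjustment rule and its dependence on the iteration count are specified in the full paper.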