Layer‐Level Adaptive Gradient Perturbation Protecting Deep Learning Based on Differential Privacy
Abstract: Deep learning's widespread dependence on large datasets raises privacy concerns due to the potential presence of sensitive information. Differential privacy stands out as a crucial method for preserving privacy, garnering significant interest for its ability to offer robust and verifiable p...
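As background for the gradient-perturbation approach the abstract refers to, here is a minimal per-layer clip-and-noise sketch in the style of DP-SGD. This is an illustrative assumption, not the paper's specific algorithm: the function name, the per-layer clipping bounds `clip_norms` (the "adaptive" knob, since a tighter bound means proportionally less noise for that layer), and the `noise_multiplier` parameter are all hypothetical.

```python
import numpy as np

def perturb_layer_gradients(grads, clip_norms, noise_multiplier, rng=None):
    """Clip each layer's gradient to its own norm bound C_l, then add
    Gaussian noise with std sigma * C_l (a DP-SGD-style sketch, applied
    layer by layer rather than to the flattened gradient).

    grads:            list of per-layer gradient arrays
    clip_norms:       per-layer clipping thresholds C_l
    noise_multiplier: sigma; noise std for layer l is sigma * C_l
    """
    rng = rng or np.random.default_rng(0)
    noisy = []
    for g, c in zip(grads, clip_norms):
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the bound C_l.
        g_clipped = g / max(1.0, norm / c)
        # Noise is calibrated to the clipping bound, as in Gaussian-mechanism DP.
        noise = rng.normal(0.0, noise_multiplier * c, size=g.shape)
        noisy.append(g_clipped + noise)
    return noisy
```

With `noise_multiplier=0.0` the function reduces to plain per-layer clipping, which is a convenient way to sanity-check the clipping step in isolation.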
Main Authors: Zhang Xiangfei, Zhang Qingchen, Jiang Liming
Format: Article
Language: English
Published: Wiley, 2025-06-01
Series: CAAI Transactions on Intelligence Technology
Online Access: https://doi.org/10.1049/cit2.70008
Similar Items
- Context-aware Location Privacy Protection Method
  by: Haohua Qing, et al. Published: (2024-10-01)
- On protecting the data privacy of Large Language Models (LLMs) and LLM agents: A literature review
  by: Biwei Yan, et al. Published: (2025-06-01)
- Survey of differentially private methods for trajectory data
  by: SUN Xinyue, et al. Published: (2025-06-01)
- Integrating Fractional-Order Hopfield Neural Network with Differentiated Encryption: Achieving High-Performance Privacy Protection for Medical Images
  by: Wei Feng, et al. Published: (2025-06-01)
- A Comprehensive Survey of Security and Privacy in UAV Systems
  by: Bryce Cordill, et al. Published: (2025-01-01)