🛡️ Strengthening Privacy in Federated Learning Against Gradient Inversion Attacks
🔺 13. Exploring the Vulnerabilities of Federated Learning: A Deep Dive into Gradient Inversion Attacks
Published on March 13
This paper surveys the vulnerability of Federated Learning (FL) to Gradient Inversion Attacks (GIA), which can reconstruct private training data from shared gradients despite FL's privacy-preserving design. It categorizes existing GIA methods into three types: optimization-based, generation-based, and analytics-based, and analyzes the effectiveness and limitations of each. The study finds that optimization-based GIA is the most practical of the three but still suffers from performance issues, while generation-based and analytics-based methods are less practical because of their strong dependencies and their detectability. The authors propose a defense strategy to strengthen privacy in FL frameworks and outline future research directions for hardening FL against these attacks.
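To make the "optimization-based" category concrete, below is a minimal PyTorch sketch in the style of Deep Leakage from Gradients (DLG; Zhu et al., 2019), the canonical attack in that family: the attacker treats a dummy input and soft label as optimization variables and minimizes the distance between the gradient they induce and the gradient a client shared during FL training. The toy model, data shapes, and hyperparameters are illustrative assumptions, not the setup used in the paper.

```python
# Sketch of an optimization-based gradient inversion attack (DLG-style).
# All model/data/hyperparameter choices are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy victim model and one private training example (assumed shapes).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x_true = torch.randn(1, 16)
y_true = torch.tensor([2])

# The honest client computes this gradient and shares it with the server.
loss = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss, model.parameters())

# The attacker optimizes dummy inputs and (soft) labels so that their
# induced gradient matches the shared one.
x_dummy = torch.randn(1, 16, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)  # label logits, optimized too
opt = torch.optim.LBFGS([x_dummy, y_dummy], lr=0.1)

for step in range(50):
    def closure():
        opt.zero_grad()
        # Cross-entropy against the optimized soft label distribution.
        dummy_loss = torch.mean(torch.sum(
            -F.softmax(y_dummy, dim=-1) * F.log_softmax(model(x_dummy), dim=-1),
            dim=-1))
        dummy_grads = torch.autograd.grad(
            dummy_loss, model.parameters(), create_graph=True)
        # Gradient-matching objective: squared distance between gradients.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)

# Low error means the private input was (approximately) reconstructed.
print("reconstruction error:", F.mse_loss(x_dummy.detach(), x_true).item())
```

LBFGS plus an L2 gradient-matching loss is the original DLG recipe; later optimization-based attacks swap in cosine distance and image priors. Generic mitigations such as clipping and noising gradients before sharing (DP-SGD style) directly degrade this matching objective, though the paper's own proposed defense may differ from that.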
...
Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
Department of Computer Science and Engineering, University of California, Santa Cruz, CA 95064, USA
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong 999077, China
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Department of Mathematics, The University of Hong Kong, Hong Kong 999077, China
Materials Innovation Institute for Life Sciences and Energy (MILES), HKU-SIRI, Shenzhen 518055, China
School of Computing and Data Science, The University of Hong Kong, Hong Kong 999077, China
Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511458, China
#leakage #benchmark #security #survey #healthcare