Authors :
Darsi Venkata Varalakshmi; Garige Sangeetha; Dammu Nikhitha; Dr. Z. Sunitha Bai
Volume/Issue :
Volume 11 - 2026, Issue 4 - April
Google Scholar :
https://tinyurl.com/ak7ch9t6
Scribd :
https://tinyurl.com/dthvarmb
DOI :
https://doi.org/10.38124/ijisrt/26apr1499
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
Federated learning enables collaborative model training without centralizing data. However, introducing class-incremental learning into the federated setting raises challenges such as catastrophic forgetting, data heterogeneity, and privacy risks. To address these problems, this paper presents Privacy-Preserving Federated Class-Incremental Learning (PP-FCIL), evaluated on the CIFAR-100 benchmark. The proposed approach uses a dual-head backbone with a channel attention fusion mechanism to retain prior knowledge while adapting to the newly introduced classes of each learning stage. To mitigate forgetting and class imbalance, it combines exemplar memory built by herding selection, knowledge distillation, supervised contrastive learning, and a balanced softmax loss within a single training process. Unlike conventional federated learning methods, the framework applies Bayesian Differential Privacy on each client, clipping gradients and adding Gaussian noise to protect data privacy, and a strict privacy-accounting protocol tracks the cumulative privacy loss. Finally, a multi-factor aggregation scheme weights each client's contribution under non-IID data distributions.
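The per-client sanitization step described above (gradient clipping followed by Gaussian noise) can be sketched as below. This is a minimal, DP-SGD-style illustration, not the paper's implementation: the function name and parameters (`clip_norm`, `noise_multiplier`) are illustrative, and the Bayesian privacy accounting the paper uses is omitted.

```python
import numpy as np

def dp_sanitize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a gradient to L2 norm `clip_norm`, then add Gaussian noise
    scaled to the clipping bound. Simplified per-client sketch; the
    privacy-loss accounting is handled separately on the server."""
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Noise standard deviation is proportional to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

With `noise_multiplier=0.0` the function reduces to pure clipping, which makes the bound easy to check: a gradient of norm 5 is scaled to norm 1 when `clip_norm=1.0`.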
Keywords :
Federated Learning; Class-Incremental Learning; Privacy-Preserving Machine Learning; Bayesian Differential Privacy; Continual Learning; Knowledge Distillation; Exemplar Memory; Non-IID Data Distribution; Channel Attention Fusion; CIFAR-100; Distributed Deep Learning; Catastrophic Forgetting.
References :
- A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” University of Toronto, 2009.
- B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proc. AISTATS, 2017.
- J. Konečný, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated Optimization: Distributed Machine Learning for On-Device Intelligence,” arXiv preprint, 2016.
- D. Lopez-Paz and M. Ranzato, “Gradient Episodic Memory for Continual Learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2017.
- S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, “iCaRL: Incremental Classifier and Representation Learning,” in Proc. CVPR, 2017.
- Z. Li and D. Hoiem, “Learning without Forgetting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
- M. He, X. Chen, J. Wang, and X. Chen, “Federated Learning with Non-IID Data: A Survey,” IEEE Transactions on Neural Networks and Learning Systems, 2021.
- P. Khosla et al., “Supervised Contrastive Learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
- T. Zhang, Z. Wang, and X. Liu, “Balanced Softmax for Long-Tailed Visual Recognition,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
- C. Dwork and A. Roth, “The Algorithmic Foundations of Differential Privacy,” Foundations and Trends in Theoretical Computer Science, 2014.
- M. Abadi et al., “Deep Learning with Differential Privacy,” in Proc. ACM CCS, 2016.
- I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, Cambridge, MA, USA: MIT Press, 2016.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. CVPR, 2016.
- A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” in Proc. ICLR, 2021.
- B. McMahan and D. Ramage, “Federated Learning: Collaborative Machine Learning without Centralized Training Data,” Google AI Blog, 2017.