Class Incremental Learning Method Based on Dynamic Structure Extension and Feature Enhancement
DOI: https://doi.org/10.54097/po2bv2r2

Keywords: Machine Learning, Incremental Learning, Feature Enhancement, Structural Expansion

Abstract
With the advancement and widespread adoption of deep learning models, there has been growing interest in class incremental learning. This paradigm aims to continuously learn new classes while retaining the ability to recognize previously learned classes in an open, dynamic environment. The central challenge of class incremental learning is to keep acquiring new classes while mitigating catastrophic forgetting, thereby striking a better balance between stability and plasticity. To address this challenge, we propose a class incremental learning method that leverages dynamically expandable representations: previously acquired features are preserved while new ones are learned, effectively reducing catastrophic forgetting. Furthermore, we introduce a feature augmentation mechanism that significantly improves the model's classification performance when new classes are incorporated. This approach ensures efficient learning of both old and new classes without compromising the effectiveness of previously trained models. Extensive experiments on two class incremental learning benchmarks consistently demonstrate significant performance advantages over competing methods.
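As a rough illustration of the dynamic structure extension idea described above (in the spirit of the dynamically expandable representations of Yan et al., cited in the references), the sketch below adds a fresh feature extractor for each incoming task, keeps earlier extractors frozen, and classifies over the concatenated features. All names (`ExpandableNet`, `add_task`) are illustrative, and random weights stand in for trained ones; this is a minimal sketch of the mechanism, not the paper's actual implementation.

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    """Small random weight matrix (placeholder for a trained extractor)."""
    return [[random.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

def matvec(w, x):
    """Multiply input vector x (length = rows of w) by matrix w -> length cols."""
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(len(w[0]))]

class ExpandableNet:
    """Each incremental task adds a new feature extractor; earlier extractors
    are kept frozen, and a rebuilt linear head classifies over the
    concatenated (old + new) features."""

    def __init__(self, in_dim, feat_dim):
        self.in_dim, self.feat_dim = in_dim, feat_dim
        self.extractors = []   # frozen extractors plus the current one
        self.head = None       # linear head over the concatenated features

    def add_task(self, n_classes_total):
        # New extractor for the incoming task; old ones are never modified.
        self.extractors.append(rand_matrix(self.in_dim, self.feat_dim))
        # The head is rebuilt to cover all classes seen so far.
        total_feat = self.feat_dim * len(self.extractors)
        self.head = rand_matrix(total_feat, n_classes_total)

    def features(self, x):
        # Concatenate the output of every extractor (old ones stay frozen).
        out = []
        for w in self.extractors:
            out.extend(matvec(w, x))
        return out

    def logits(self, x):
        return matvec(self.head, self.features(x))

net = ExpandableNet(in_dim=8, feat_dim=4)
net.add_task(n_classes_total=2)   # task 1 introduces 2 classes
net.add_task(n_classes_total=4)   # task 2 adds 2 more classes
x = [random.gauss(0.0, 1.0) for _ in range(8)]
print(len(net.features(x)), len(net.logits(x)))   # feature dim grew to 8; 4 class scores
```

Because old extractors are frozen, the representation learned for earlier classes cannot drift when new tasks arrive, which is how this family of methods limits catastrophic forgetting at the cost of a growing feature dimension.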
References
Xu M, Guo L Z. Learning from group supervision: The impact of supervision deficiency on multi-label learning. Science China Information Sciences, 2021, vol. 64, pp. 1−13.
Lippi M, Montemurro M A, Degli Esposti M, et al. Natural language statistical features of LSTM-generated texts. IEEE Transactions on Neural Networks and Learning Systems, 2019, pp. 3326−3337.
Rebuffi S A, Kolesnikov A, Sperl G, Lampert C H. iCaRL: Incremental classifier and representation learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017, pp. 5533−5542.
Kirkpatrick J, Pascanu R, Rabinowitz N, Veness J, Desjardins G, Rusu A A, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 2017, vol. 114, no. 13, pp. 3521−3526.
Zenke F, Poole B, Ganguli S. Continual learning through synaptic intelligence. Proceedings of the 34th International Conference on Machine Learning (ICML). Sydney, Australia: PMLR, 2017, pp. 3987−3995.
Aljundi R, Babiloni F, Elhoseiny M, Rohrbach M, Tuytelaars T. Memory aware synapses: Learning what (not) to forget. Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich, Germany: Springer, 2018, pp. 144−161.
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, et al. Generative adversarial networks. Communications of the ACM, 2020, vol. 63, no. 11, pp. 139−144.
Odena A, Olah C, Shlens J. Conditional image synthesis with auxiliary classifier GANs. Proceedings of the 34th International Conference on Machine Learning (ICML). Sydney, Australia: PMLR, 2017, pp. 2642−2651.
Kemker R, Kanan C. FearNet: Brain-inspired model for incremental learning. Proceedings of the 6th International Conference on Learning Representations (ICLR). Vancouver, Canada: OpenReview.net, 2018.
Kingma D P, Welling M. An introduction to variational autoencoders. Foundations and Trends in Machine Learning, 2019, vol. 12, no. 4, pp. 307−392.
Yu L, Twardowski B, Liu X L, Herranz L, Wang K, Cheng Y M, et al. Semantic drift compensation for class-incremental learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020, pp. 6980−6989.
Zhu F, Zhang X Y, Wang C, Yin F, Liu C L. Prototype augmentation and self-supervision for incremental learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021, pp. 5867−5876.
Liu Y Y, Schiele B, Sun Q R. Adaptive aggregation networks for class-incremental learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021, pp. 2544−2553.
Yan S P, Xie J W, He X M. DER: Dynamically expandable representation for class incremental learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021, pp. 3014−3023.
Kang B Y, Xie S N, Rohrbach M, Yan Z C, Gordo A, Feng J S, et al. Decoupling representation and classifier for long-tailed recognition. Proceedings of the 8th International Conference on Learning Representations (ICLR). Addis Ababa, Ethiopia: OpenReview.net, 2020.
Zhang H Y, Cisse M, Dauphin Y N, et al. Mixup: Beyond empirical risk minimization. Proceedings of the 6th International Conference on Learning Representations (ICLR). Vancouver, Canada: OpenReview.net, 2018.
Yun S, Han D, Oh S J, et al. CutMix: Regularization strategy to train strong classifiers with localizable features. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images. Technical Report, University of Toronto, 2009.
Deng J, Dong W, Socher R, Li L J, Li K, Li F F. ImageNet: A large-scale hierarchical image database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Miami, USA: IEEE, 2009, pp. 248−255.
Hou S, Pan X, Loy C C, Wang Z, Lin D. Learning a unified classifier incrementally via rebalancing. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 831−839.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770−778.
Li Z Z, Hoiem D. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, vol. 40, no. 12, pp. 2935−2947.
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.