


A Survey on Advancements in Transfer and Continual Learning: Insights for Modern Computer Vision


Authors : Hemanth Sai Kosari; Deeksha Akkati

Volume/Issue : Volume 11 - 2026, Issue 3 - March


Google Scholar : https://tinyurl.com/2vn68j9m

Scribd : https://tinyurl.com/43hdtcun

DOI : https://doi.org/10.38124/ijisrt/26mar843

Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.


Abstract : Transfer learning and continual learning are pivotal methodologies in modern artificial intelligence, offering complementary routes to more capable computer vision systems. Transfer learning reuses pretrained models to perform specific tasks efficiently with limited data, while continual learning allows systems to learn new tasks incrementally without forgetting prior knowledge. Vision Transformers (ViTs), leveraging attention mechanisms, have significantly advanced feature representation and task performance in image classification and object detection, often outperforming traditional convolutional networks. Despite these advances, challenges such as domain adaptation and catastrophic forgetting remain open. This paper reviews techniques including fine-tuning, Elastic Weight Consolidation (EWC), and self-supervised learning, highlighting their applications in computer-vision-driven fields such as autonomous driving and medical imaging. It identifies research gaps and provides insights into building scalable and robust computer vision solutions.
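One of the continual-learning techniques named in the abstract, Elastic Weight Consolidation (EWC), can be made concrete with a small sketch. The snippet below implements the standard EWC quadratic penalty, (λ/2) Σᵢ Fᵢ (θᵢ − θ*ᵢ)², which discourages parameters deemed important for earlier tasks from drifting. The function name, toy parameter values, and the diagonal-Fisher simplification are illustrative assumptions for this sketch, not code from any cited work.

```python
# Minimal, illustrative sketch of the Elastic Weight Consolidation (EWC)
# penalty. Names and values are hypothetical examples.
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters (1-D array)
    theta_star -- parameters learned on the previous task
    fisher     -- diagonal Fisher information estimate (per-parameter importance)
    lam        -- regularization strength
    """
    diff = theta - theta_star
    return 0.5 * lam * np.sum(fisher * diff ** 2)

# Toy usage: drift in an "important" parameter (high Fisher value) is
# penalized far more heavily than drift in an unimportant one.
theta_star = np.array([1.0, -2.0, 0.5])   # parameters after the old task
fisher = np.array([10.0, 0.1, 1.0])       # first parameter matters most
theta = np.array([1.5, -1.0, 0.5])        # current parameters
print(ewc_penalty(theta, theta_star, fisher, lam=2.0))
```

In practice this term is added to the new task's loss before backpropagation, so gradient descent trades off new-task accuracy against preserving parameters the Fisher estimate marks as important, which is how EWC mitigates the catastrophic forgetting discussed above.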


