Efficient Training of ResNet-50 on Large Scale Datasets Using Enhanced Particle Swarm Optimization


Authors : A. W. C. K. Atugoda; S. D. Fernando

Volume/Issue : Volume 10 - 2025, Issue 11 - November


Google Scholar : https://tinyurl.com/mw96uwm4

Scribd : https://tinyurl.com/yujf2r2z

DOI : https://doi.org/10.38124/ijisrt/25nov358



Abstract : Deep Neural Networks (DNNs) such as ResNet-50 have achieved state-of-the-art results in large-scale image classification. However, their training performance depends strongly on optimization efficiency. Conventional methods like Stochastic Gradient Descent (SGD) often struggle with slow convergence, learning-rate sensitivity, and local minima entrapment. To overcome these challenges, this study introduces an Enhanced Particle Swarm Optimization (PSO) technique that adaptively adjusts particle dynamics to maintain a better exploration-exploitation balance during training. The method was evaluated on the CIFAR-10 dataset and compared against Standard PSO and SGD under identical experimental conditions. The results demonstrate that Enhanced PSO achieves superior validation accuracy (up to 95.7%), faster convergence, and a more stable weight distribution centered near zero. These characteristics reflect a well-regularized learning process with improved generalization. Overall, the Enhanced PSO framework provides a robust and scalable optimization approach for deep neural networks, offering a viable alternative to conventional gradient-based training algorithms.

Keywords : Particle Swarm Optimization, Deep Neural Networks, ResNet-50, Convergence Stability, Weight Values Optimization.
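The abstract describes an Enhanced PSO that adaptively adjusts particle dynamics to balance exploration and exploitation. The paper's exact update rules are not given on this page; as a rough illustration of the general idea, the sketch below runs standard PSO with a linearly decreasing inertia weight (one common adaptive scheme) on a toy sphere function. All names and parameter values here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def pso_sphere(n_particles=20, dim=5, iters=100,
               w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f(x) = sum(x^2) with PSO, decreasing inertia from w_max to w_min."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_f = (pbest ** 2).sum(axis=1)               # their fitness values
    gbest = pbest[pbest_f.argmin()].copy()           # global best position

    for t in range(iters):
        # Linearly decreasing inertia: early iterations explore, later ones exploit.
        w = w_max - (w_max - w_min) * t / (iters - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = (x ** 2).sum(axis=1)
        improved = f < pbest_f                       # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()       # update global best

    return gbest, float(pbest_f.min())

best_x, best_f = pso_sphere()
```

In a neural-network setting, each particle would instead encode a candidate weight vector and the fitness would be the training (or validation) loss; the same velocity/position updates apply, only the fitness evaluation changes.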


