Authors :
Enesalp ÖZ; Muhammed Kürşad UÇAR
Volume/Issue :
Volume 10 - 2025, Issue 3 - March
Google Scholar :
https://tinyurl.com/3z89c749
Scribd :
https://tinyurl.com/3wmht2cp
DOI :
https://doi.org/10.38124/ijisrt/25mar609
Abstract :
In this study, an artificial intelligence-assisted image processing system was developed to prevent errors in the part feeding processes of an industrial robot cell. The YOLOv7-tiny model was used to detect parts accurately, enabling effective quality control. Communication with the PLC was established via the Modbus protocol, and the system hardware comprised an NVIDIA Jetson AGX Orin, a Basler acA2500-60uc camera, and a Raspberry Pi WaveShare monitor. A total of 2,400 data samples were used for model training, and the model achieved an accuracy of 98.07%. The developed system minimized human error by preventing incorrect part feeding and significantly improved the efficiency of the production process. In particular, the system's accuracy and processing speed demonstrated its suitability for real-time applications. In conclusion, this study highlights the effective application of artificial intelligence and image processing techniques in industrial manufacturing.
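The abstract describes the pipeline only at a high level. As a minimal sketch, assuming the YOLOv7-tiny model is exported to ONNX and run with onnxruntime, and assuming Modbus TCP as the transport (the paper does not state which Modbus variant is used), the Python snippet below illustrates how a detection result could be forwarded to a PLC register with pymodbus. The model file name, PLC address, register number, and ONNX output layout are hypothetical, not taken from the paper.

```python
# Illustrative sketch only -- not code from the paper. Model path, PLC address,
# register number, and ONNX output layout are hypothetical assumptions.

import cv2                                    # frame loading / preprocessing
import numpy as np
import onnxruntime as ort                     # assumes YOLOv7-tiny exported to ONNX
from pymodbus.client import ModbusTcpClient   # Modbus TCP client (pymodbus 3.x)

MODEL_PATH = "yolov7-tiny-parts.onnx"   # hypothetical exported model
PLC_IP = "192.168.0.10"                 # hypothetical PLC address
RESULT_REGISTER = 0                     # hypothetical holding register for the result

session = ort.InferenceSession(MODEL_PATH, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def part_detected(frame_bgr, conf_threshold=0.5):
    """Run YOLOv7-tiny on one frame and return True if any part is detected."""
    img = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (640, 640)).astype(np.float32) / 255.0
    img = np.transpose(img, (2, 0, 1))[np.newaxis, ...]          # NCHW layout
    detections = session.run(None, {input_name: img})[0]
    # Assumes an end-to-end (NMS-included) export whose rows end with a score.
    return any(det[-1] >= conf_threshold for det in detections)

client = ModbusTcpClient(PLC_IP)
client.connect()

frame = cv2.imread("sample_frame.png")        # in production: a Basler camera grab
ok = part_detected(frame)
client.write_register(RESULT_REGISTER, 1 if ok else 0)   # 1 = correct part present
client.close()
```

In a deployed cell, the frame would come from the Basler camera's acquisition API and the PLC program would poll the register to gate the part-feeding step; those details are outside what the abstract specifies.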
Keywords :
Artificial Intelligence, Image Processing, YOLOv7-tiny, Industrial Automation, Part Inspection.