Sign Language Recognition System Using a DL-CNN Model Based on VGG16 and ImageNet with a Mobile Application


Authors : S Asrita Sreekari; Bathi Venkata Varaha Durga Yamini; Somayajula Venkata Thanmayi Sri; Maram Naga Sireesha

Volume/Issue : Volume 9 - 2024, Issue 5 - May

Google Scholar : https://tinyurl.com/395yk2ca

Scribd : https://tinyurl.com/muaadrvn

DOI : https://doi.org/10.38124/ijisrt/IJISRT24MAY1338

Abstract : In this project, a Deep Learning Convolutional Neural Network (DL-CNN) model based on VGG16 and pretrained on ImageNet is used to develop a Sign Language Recognition System incorporated into a mobile application. The system recognizes the hand gestures and movements inherent in sign language, allowing real-time interpretation of gestures captured by the device's camera. Through the app interface, users capture sign-language gestures and receive corresponding written or spoken output for better communication. By improving accessibility and inclusivity for people with hearing loss, this project seeks to close communication gaps and promote understanding through technology, facilitating seamless communication in a variety of settings.
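The transfer-learning setup the abstract describes can be sketched as follows: a VGG16 convolutional base (pretrained on ImageNet in practice) is frozen and topped with a new classification head for sign-language gestures. The class count (26, assuming one class per static alphabet sign) and the head layers are illustrative assumptions, not details from the paper; `weights=None` is used here only so the sketch runs offline, where the project would pass `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26             # assumed: one class per static A-Z sign
INPUT_SHAPE = (224, 224, 3)  # VGG16's standard input size

# In practice: weights="imagenet" loads the pretrained convolutional base;
# None keeps this sketch runnable without downloading the weights.
base = tf.keras.applications.VGG16(
    weights=None, include_top=False, input_shape=INPUT_SHAPE)
base.trainable = False       # freeze the pretrained feature extractor

# New classification head trained on the sign-language dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base means only the head's weights are updated, which is what makes training feasible on a comparatively small gesture dataset.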

Keywords : VGG16, ImageNet, Convolutional Neural Networks, Mobile Application.
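To run such a model on-device, as the mobile application requires, a common route is converting the trained Keras model to TensorFlow Lite. The tiny stand-in model below exists only to keep the sketch self-contained; the actual conversion step would receive the trained VGG16-based model instead.

```python
import tensorflow as tf

# Stand-in for the trained sign-language classifier (assumed 26 classes).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(26, activation="softmax"),
])

# Convert to a TensorFlow Lite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize for mobile
tflite_model = converter.convert()  # bytes, ready to ship in the app bundle
```

The resulting bytes are bundled with the app and executed by the TFLite interpreter on frames from the device's camera.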

References :

  1. Li, D., Zhang, H., Liu, Y., & Du, Y. (2022). Real-time American Sign Language recognition using convolutional neural networks on embedded platforms. IEEE Access, 7, 159465-159475.
  2. Puertas, E., Jara, C. A., & Pomares, J. (2020). Sign language recognition through machine learning: current state of the art and challenges. Sensors, 19(20), 4400.
  3. Starner, T., & Pentland, A. (2019). Real-time American Sign Language recognition from video using hidden Markov models. Technical Report #357, MIT Media Laboratory Perceptual Computing Section.
  4. Sharma, A., Sawant, S., & Singhal, S. (2020). Sign language recognition using deep learning techniques: A systematic review. International Journal of Machine Learning and Cybernetics, 11(7), 1623-1650.
  5. Chen, L., Han, Y., & Gao, S. (2020). A sign language recognition method based on deep learning. Multimedia Tools and Applications, 79(9-10), 5719-5736.
  6. Drahansky, M., Klepal, M., & Hunka, F. (2019). Real-time sign language recognition system based on deep neural networks. In 2019 International Conference on Applied Electronics (AE) (pp. 1-4). IEEE.
  7. Hassani, N. H., & Arifin, A. (2020). Real-time American Sign Language recognition system using machine learning. International Journal of Electrical and Computer Engineering (IJECE), 10(5), 4691-4700.
  8. Huang, X., & Zhang, W. (2018). Sign language recognition based on a convolutional neural network. IEEE Access, 6, 41819-41827.
  9. Hwang, S. W., & Kim, H. J. (2017). Sign language recognition using recurrent neural networks with conditional random fields. Applied Sciences, 7(12), 1312.
  10. Tavares, A., & Dias, M. S. (2016). Real-time sign language recognition systems: A review. Expert Systems with Applications, 65, 259-273.
  11. Jumaah, F. M., & Abdulkareem, K. H. (2020). Real-time Arabic sign language recognition using machine learning techniques. IEEE Access, 8, 221862-221874.
  12. Krejcar, O., & Jan, J. (2016). Sign language recognition in videos with multiple instance learning. International Journal of Machine Learning and Cybernetics, 7(3), 397-408.
  13. Kowsari, K., Heidarysafa, M., Brown, D. E., Meimandi, K. J., & Barnes, L. E. (2019). A text mining approach for capturing temporal and trends of scientific research: An empirical case study using Medical Research papers. Expert Systems with Applications, 124, 60-73.
  14. Yan, Y., & Wang, C. (2019). Sign language recognition system using the Kinect sensor and a convolutional neural network. IEEE Access, 7, 58919-58927.
  15. Zeinali, Y., Harandi, M. T., & Lovell, B. C. (2018). Sign language recognition using 3D convolutional neural networks. IEEE Transactions on Human-Machine Systems, 49(5), 463-474.
