Visualizing Language: CNNs for Sign Language Recognition


Authors : Hemendra Kumar Jain; Pendyala Venkat Subash; Kotla Veera Venkata Satya Sai Narayana; Dr S Sri Harsha; Shaik Asad Ashraf

Volume/Issue : Volume 8 - 2023, Issue 11 - November

Google Scholar : https://tinyurl.com/2p8j88zd

Scribd : https://tinyurl.com/2w7ck53d

DOI : https://doi.org/10.5281/zenodo.10200393

Abstract : Sign language is an essential form of communication for Deaf and hard-of-hearing people. However, its visual nature poses particular challenges for automated recognition. This paper investigates the use of convolutional neural networks (CNNs) for sign language gesture recognition. CNNs are a viable option for interpreting sign language because of their strong performance across a variety of computer vision tasks. The study describes how sign language images are prepared for training and testing the CNN model, including scaling, normalization, and grayscale conversion. The model, built with TensorFlow and Keras, stacks multiple convolutional and pooling layers followed by dense layers for classification. It was trained and validated on a sizable dataset of sign language gestures covering a wide variety of signs. The CNN performs well on many signs, achieving accuracy comparable to human recognition. The results highlight how deep learning approaches can help the Deaf community communicate more effectively and overcome language barriers.

Keywords : Sign Language Recognition, Convolutional Neural Networks (CNNs), Visual Communication, Deaf Community, Assistive Technology, Inclusive Communication.
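The following is a minimal sketch (not the authors' released code) of the pipeline the abstract describes: grayscale conversion, resizing, and normalization of input images, followed by a TensorFlow/Keras CNN with stacked convolutional and pooling layers and dense classification layers. The input resolution (64x64), the number of layers and filters, and the class count (26, one per static alphabet sign) are assumptions for illustration only.

```python
# Minimal sketch under assumed settings; the paper's exact architecture may differ.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = 64        # assumed input resolution
NUM_CLASSES = 26     # assumed: one class per static alphabet sign

def preprocess(image: np.ndarray) -> tf.Tensor:
    """Grayscale conversion, resizing, and [0, 1] normalization of an RGB image."""
    image = tf.image.rgb_to_grayscale(image)               # H x W x 1
    image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE))   # scale to a fixed size
    return tf.cast(image, tf.float32) / 255.0              # normalize pixel values

def build_model() -> tf.keras.Model:
    """Stacked Conv2D + MaxPooling blocks followed by dense layers for classification."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: preprocess a batch of images, then train with model.fit(x, y).
```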

