Authors :
Gayatri Gangeshkumar Waghmare; Sakshee Satish Yande; Rajesh Dattatray Tekawade; Dr. Chetan Aher
Volume/Issue :
Volume 10 - 2025, Issue 4 - April
Google Scholar :
https://tinyurl.com/2bykermw
Scribd :
https://tinyurl.com/yye5u273
DOI :
https://doi.org/10.38124/ijisrt/25apr1474
Abstract :
The aim of this paper is to design a user-friendly system that assists individuals with hearing difficulties. Sign language serves as a vital communication tool for people with hearing and speech impairments. However, the lack of widespread understanding of sign language creates barriers between the deaf community and the general public. This paper presents a real-time sign language translation system that converts gestures into text and speech using advanced machine learning techniques. For those who are deaf or speech impaired, sign language is often the primary mode of communication, and limited public knowledge of it leads to communication barriers. This study examines how data science techniques can be used to close this gap by translating sign language movements into speech.
The method comprises three steps: capturing hand gestures with a webcam, recognizing them as American Sign Language (ASL) signs, and converting the recognized text to speech using Google Text-to-Speech (gTTS) synthesis. The system focuses on delivering effective real-time communication through the use of convolutional neural networks (CNNs) for gesture recognition. The project employs a machine learning pipeline consisting of data collection, preprocessing, model training, real-time detection, and speech synthesis. This paper details various techniques, challenges, and future directions for sign-language-to-speech conversion, and the role played by data science in making communication more accessible.
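To make the described pipeline concrete, the following is a minimal Python sketch of the data-loading and model-training stages. The dataset directory name, image size, number of classes, and the CNN layout are illustrative assumptions; the paper does not publish its exact architecture or data format.

# Minimal sketch of the data collection, preprocessing, and model-training stages.
# Assumptions (not specified in the paper): ASL letter images are stored in
# "asl_dataset/<letter>/", resized to 64x64 grayscale, and the CNN below is an
# illustrative layout rather than the authors' exact architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)
DATA_DIR = "asl_dataset"   # hypothetical folder of per-class subdirectories
NUM_CLASSES = 26           # static ASL letters A-Z

# Data collection + preprocessing: load labeled images and split off a validation set.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, color_mode="grayscale",
    validation_split=0.2, subset="training", seed=42, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, image_size=IMG_SIZE, color_mode="grayscale",
    validation_split=0.2, subset="validation", seed=42, batch_size=32)

# A small CNN for static-gesture image classification.
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(64, 64, 1)),  # normalize pixel values
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("asl_cnn.h5")   # reused by the real-time detection loop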
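The next sketch illustrates the real-time detection and speech-synthesis stages: frames are read from a webcam, a fixed region of interest is classified by the trained CNN, and the recognized letter is spoken with the gTTS library. The region-of-interest coordinates, label order, key bindings, and saved model filename are assumptions made for illustration, not the authors' exact implementation.

# Minimal sketch of real-time detection and speech synthesis: capture webcam
# frames, classify a fixed region of interest with the trained CNN, and
# synthesize the recognized letter with Google Text-to-Speech (gTTS).

import cv2
import numpy as np
import tensorflow as tf
from gtts import gTTS

model = tf.keras.models.load_model("asl_cnn.h5")
CLASSES = [chr(c) for c in range(ord("A"), ord("Z") + 1)]   # assumed label order

def speak(text: str, path: str = "output.mp3") -> None:
    # gTTS requires an internet connection; playback of the mp3 is platform-specific.
    gTTS(text=text, lang="en").save(path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break

    roi = frame[100:300, 100:300]                  # fixed hand region (assumed)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)   # preprocessing to match training
    gray = cv2.resize(gray, (64, 64))
    batch = gray.reshape(1, 64, 64, 1).astype("float32")

    probs = model.predict(batch, verbose=0)[0]
    letter = CLASSES[int(np.argmax(probs))]

    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL to speech", frame)

    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):       # speak the currently recognized letter
        speak(letter)
    elif key == ord("q"):     # quit
        break

cap.release()
cv2.destroyAllWindows()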
Keywords :
Sign Language Recognition, CNN, Text-to-Speech, Real-Time Translation, American Sign Language (ASL), Deep Learning, Image Classification.