Real-Time AI Sign Language Interpreter


Authors : Abiram R; Vikneshkumar D; Abhishek E T; Bhuvaneshwari S; Joyshree K

Volume/Issue : Volume 10 - 2025, Issue 4 - April


Google Scholar : https://tinyurl.com/4d3w5bjj

Scribd : https://tinyurl.com/d9ka5kb3

DOI : https://doi.org/10.38124/ijisrt/25apr877




Abstract : Hearing loss and communication challenges affect the lives of millions of individuals, particularly those who are Deaf and hard of hearing. According to the World Health Organization (WHO), 43 million Indians and 466 million people worldwide live with disabling hearing loss. In India, this group struggles to access employment, healthcare, and education, and despite initiatives such as the National Policy for Persons with Disabilities and the Rights of Persons with Disabilities Act, gaps remain in ensuring full inclusion. WHO estimates that by 2050 some 2.5 billion people will have a degree of hearing loss, with 700 million requiring hearing rehabilitation, and that an additional 1 billion young people are at risk of avoidable hearing loss due to unsafe listening practices. Our project, the Real-Time Sign Language Interpreter, aims to overcome these obstacles by bridging the gap between the Deaf community and the wider hearing world. The system enables uninterrupted communication by translating hand gestures into text, and then into speech, in real time using AI and machine learning. Beyond communication access, the project enables members of the Deaf community to participate more fully in education, employment, and social life. The technology requires relatively low investment yet can deliver an immense social return by making services available to everyone, regardless of background or circumstance.

Keywords : Sign Language Recognition, Gesture Recognition, Machine Learning, Computer Vision, AI.
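The gesture-to-text-to-speech pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the landmark templates, the nearest-template classifier, and all function names here are assumptions introduced for clarity. A production system would use a trained model (for example a CNN) over real hand-landmark data from a detector such as MediaPipe Hands, and pass the resulting text to a text-to-speech engine.

```python
import numpy as np

# Hypothetical gesture vocabulary: each sign is represented by a reference
# vector of hand-landmark coordinates (21 landmarks x 2D = 42 values), as a
# landmark detector such as MediaPipe Hands would produce per frame. The
# reference vectors below are synthetic placeholders, not real sign data.
GESTURE_TEMPLATES = {
    "HELLO": np.linspace(0.0, 1.0, 42),
    "THANK_YOU": np.linspace(1.0, 0.0, 42),
}

def normalize(landmarks: np.ndarray) -> np.ndarray:
    """Make recognition invariant to hand position and size: translate the
    landmarks so the wrist (first point) is the origin, then scale to unit norm."""
    pts = landmarks.reshape(-1, 2)
    pts = pts - pts[0]                 # wrist-relative coordinates
    norm = np.linalg.norm(pts)
    return (pts / norm).ravel() if norm > 0 else pts.ravel()

def classify(landmarks: np.ndarray) -> str:
    """Nearest-template classification of one frame's hand landmarks."""
    v = normalize(landmarks)
    return min(GESTURE_TEMPLATES,
               key=lambda g: np.linalg.norm(v - normalize(GESTURE_TEMPLATES[g])))

def gestures_to_text(frames: list) -> str:
    """Collapse per-frame predictions into a sentence by dropping consecutive
    repeats; in the full system this text would be spoken by a TTS engine."""
    words, last = [], None
    for frame in frames:
        word = classify(frame)
        if word != last:
            words.append(word)
        last = word
    return " ".join(words)
```

In a real interpreter the nearest-template step would be replaced by a learned classifier, but the surrounding structure (per-frame recognition followed by temporal smoothing into text) is the same.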


