Real Time Conversion of English Audio to American Sign Language and Vice Versa


Authors : Manasa Mandapati

Volume/Issue : Volume 10 - 2025, Issue 9 - September


Google Scholar : https://tinyurl.com/bd5mhemr

Scribd : https://tinyurl.com/29wznkm4

DOI : https://doi.org/10.38124/ijisrt/25sep099



Abstract : People with hearing impairments form a large community with specific needs that technologists have only recently begun to address. To date, no single device converts audio to sign language and sign language to audio in real time. This problem can be handled with an interpreter system for speech to sign language that translates English speech into American Sign Language video in real time [2]. In the reverse direction, sign language is translated to speech using a device built around an Arduino board and flex sensors. Gestures made by the wearer are detected by the sensors, and based on pre-defined conditions over the sensor readings, corresponding text messages are sent to an Android device over the Global System for Mobile Communications (GSM) network [4], where they are converted to speech.
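For illustration, a minimal Arduino-style sketch of the sign-to-speech path described above is given below. This is not the authors' firmware: the pin assignments, bend threshold, gesture-to-phrase mapping, and recipient phone number are all assumptions, and the GSM module is assumed to accept standard AT commands for sending SMS.

// Hypothetical sketch of the flex-sensor glove: read five flex sensors, match the
// bend pattern against pre-defined conditions, and send the matching phrase as an
// SMS through a GSM module. The Android side would then speak the received text.

#include <SoftwareSerial.h>

SoftwareSerial gsm(7, 8);                        // RX, TX pins wired to the GSM module (assumed)

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};   // one flex sensor per finger (assumed wiring)
const int BEND_THRESHOLD = 600;                  // ADC reading above which a finger counts as bent

void sendSms(const char *text) {
  gsm.println("AT+CMGF=1");                      // put the module in SMS text mode
  delay(200);
  gsm.println("AT+CMGS=\"+10000000000\"");       // placeholder recipient number
  delay(200);
  gsm.print(text);
  gsm.write(26);                                 // Ctrl+Z terminates and sends the SMS
  delay(2000);
}

void setup() {
  gsm.begin(9600);
}

void loop() {
  // Encode the hand shape as a 5-bit pattern: bit i is set when finger i is bent.
  int pattern = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(FLEX_PINS[i]) > BEND_THRESHOLD) {
      pattern |= (1 << i);
    }
  }

  // Pre-defined conditions mapping bend patterns to phrases (illustrative values only).
  if (pattern == 0b11111) {
    sendSms("HELLO");
  } else if (pattern == 0b00011) {
    sendSms("THANK YOU");
  }

  delay(1000);                                   // simple debounce between readings
}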

Keywords : Sign Language, Speech Recognition, Flex Sensors, Gestures, Speech to Text, Text to Speech.

References :

  1. American Sign Language Video Dictionary and Inflection Guide. (2000). [CD-ROM]. Rochester, NY: National Technical Institute for the Deaf, Rochester Institute of Technology. ISBN: 0-9720942-0-2
  2. ASL University. Fingerspelling: Introduction. http://www.lifeprint.com/asl101/fingerspelling/fingerspelling.
  3. Baker, J.K. (1975). The DRAGON System-An Overview. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-23(1). pp.24-29.
  4. Becchetti, C., Ricotti, L. R. (1999). Speech Recognition Theory and C++ Implementation. England: Wiley.
  5. Bornstein, H., Saulnier, K.L., Hamilton, L.B. (1992). The Comprehensive Signed English Dictionary (Sixth printing). The Signed English Series. Washington, DC: Clerc Books, Gallaudet University Press.
  6. Gouvêa, E. The CMU Sphinx Group Open Source Speech Recognition Engines. http://www.speech.cs.cmu.edu/sphinx/
  7. Harrington, T. (July 2004). Statistics: Deaf Population of the US. http://library.gallaudet.edu/dr/faq-statistics-deafus.html
  8. Huang, X., Acero, A., Hon, H-W., Reddy, R. (2001). Spoken Language Processing: A Guide to Theory, Algorithm and System Development. Prentice Hall PTR.
  9. Hwang, Mei-Yuh. (1993). Subphonetic Acoustic Modeling for Speaker Independent Continuous Speech Recognition. Ph.D. thesis, Computer Science Department, Carnegie Mellon University. Tech Report No. CMU-CS-93-230
  10. iCommunicator TM pricing (2003). http://www.myicommunicator.com/?action=pricing
  11. Jelinek, F. (Apr. 1976). Continuous Speech Recognition by Statistical Methods. Proceedings of the IEEE, Vol. 64, No. 4. pp. 532-556.
  12. Ravishankar, M. (May 1996). Efficient Algorithms for Speech Recognition. Ph.D. dissertation, Carnegie Mellon University. Tech Report No. CMU-CS-96-143.
  13. Ravishankar, M. K. (2004). Sphinx-3 s3.X Decoder (X=5). Sphinx Speech Group, School of Computer Science, CMU. http://cmusphinx.sourceforge.net/sphinx3/
  14. Rosenfeld, R. The CMU Statistical Language Modeling (SLM) Toolkit. http://www.speech.cs.cmu.edu/SLM_info.html

