


Intelligent Hand Gesture Recognition System for Automated Sign Language Translation


Authors : Saniya Pathan; Swaraj Nangare; Riya Patil; Sakshi Renuse; Anagha Chaphadkar

Volume/Issue : Volume 11 - 2026, Issue 4 - April


Google Scholar : https://tinyurl.com/4s2bjua3

Scribd : https://tinyurl.com/4cfn52au

DOI : https://doi.org/10.38124/ijisrt/26apr1716



Abstract : This paper presents a software-based Sign Language Interpreter aimed at improving communication between hearing-impaired individuals and non-signers. The system uses MediaPipe for real-time hand landmark detection and tracking, while a Convolutional Neural Network (CNN) classifies the detected gestures, which are then rendered as text and speech output. Focused on Indian Sign Language (ISL), the model is trained on a custom dataset to maintain accuracy under diverse lighting and background conditions. By integrating deep learning with computer vision, the system achieves efficient and reliable recognition using only a standard camera, without additional hardware. This research highlights the potential of AI-based software applications in fostering inclusivity and accessibility, offering an intelligent and cost-effective solution for assistive communication.

Keywords : Sign Language Interpreter, MediaPipe, CNN, Deep Learning, Gesture Recognition, Assistive Communication.
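The pipeline the abstract describes (MediaPipe hand landmarks feeding a CNN classifier) typically includes a landmark-normalization step so that features are robust to hand position and camera distance. The sketch below illustrates one common form of that step, assuming MediaPipe's 21-point hand layout with the wrist as landmark 0; the function name and normalization details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Translate landmarks to a wrist-centered origin and rescale,
    making the feature vector invariant to hand position and to
    distance from the camera (a common preprocessing step before
    a gesture classifier)."""
    pts = np.asarray(landmarks, dtype=np.float32)  # shape (21, 2)
    pts = pts - pts[0]            # wrist (landmark 0) becomes the origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale         # largest coordinate magnitude -> 1
    return pts.flatten()          # 42-dim feature vector for the classifier

# Example: 21 synthetic (x, y) landmarks in the range MediaPipe emits
fake_landmarks = np.random.rand(21, 2)
features = normalize_landmarks(fake_landmarks)
print(features.shape)  # (42,)
```

In a full system, vectors like this (or the raw cropped hand image) would be fed to the CNN for classification, and the predicted label passed to a text-to-speech engine.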


