Authors :
Kailash Kumar Bharaskar; Dharmendra Gupta; Vivek Kumar Gupta; Rachit Pandya; Rachit Jain
Volume/Issue :
Volume 10 - 2025, Issue 4 - April
Google Scholar :
https://tinyurl.com/ybt4y5hn
Scribd :
https://tinyurl.com/ypajrdp8
DOI :
https://doi.org/10.38124/ijisrt/25apr2374
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
Sign language is a vital means of communication for millions, yet technological barriers still limit accessibility.
To address this, we analyzed the existing deep learning model and identified key areas for enhancement. We expanded the
neural network to improve learning capacity, replaced ReLU with LeakyReLU to avoid inactive neurons, and added batch
normalization to maintain gradient stability throughout training. To reduce overfitting while preserving performance, we
fine-tuned the dropout layer. We also improved preprocessing to filter out background noise, enhancing the system’s
ability to accurately track gestures. Training was accelerated with early stopping and model checkpointing, which retain the best-performing version without incurring unnecessary computation. The final step was converting the model to TensorFlow Lite so it can run efficiently on mobile and edge devices, enabling real-world deployment. The results were clear: markedly improved accuracy, greater training stability, and solid real-time performance, with confusion matrices and ROC curves confirming measurable gains. More importantly, this project is about inclusivity and what it means to bring technology closer to the community it serves.
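Two of the ideas above can be illustrated in miniature (a sketch only, not the authors' implementation; function names and the example loss values are hypothetical): LeakyReLU replaces ReLU's hard zero with a small negative slope so neurons never go fully inactive, and early stopping halts training once validation loss has not improved for a set number of epochs.

```python
def leaky_relu(x, alpha=0.01):
    """LeakyReLU: passes positives through unchanged, scales negatives
    by a small slope alpha, so the neuron keeps a gradient path even
    for negative inputs (unlike plain ReLU, which outputs 0)."""
    return x if x > 0 else alpha * x

def early_stop_epoch(val_losses, patience=3):
    """Return the epoch index at which training would halt: the first
    epoch where the best validation loss has gone `patience` consecutive
    epochs without improving (None if training runs to completion)."""
    best = float("inf")
    epochs_since_best = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, epochs_since_best = loss, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                return epoch
    return None

# ReLU would output 0 for -2.0; LeakyReLU keeps a small signal.
print(leaky_relu(-2.0))   # -0.02
print(leaky_relu(3.0))    # 3.0
# Validation loss improves, then plateaus; with patience=3 we stop at epoch 5.
print(early_stop_epoch([0.9, 0.7, 0.6, 0.65, 0.66, 0.7, 0.71]))  # 5
```

In a Keras training loop these roles are typically played by the built-in `LeakyReLU` layer and the `EarlyStopping`/`ModelCheckpoint` callbacks, which is consistent with the pipeline the abstract describes.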
References :
- Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
- Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint.
- Vaswani, A., et al. (2017). Attention is All You Need. Advances in Neural Information Processing Systems.
- Abeer et al. (2021). Deep Learning for Sign Language Recognition: Current Techniques, Benchmarks, and Open Issues. ResearchGate.
- SLR model: https://github.com/CodingSamrat/Sign-Language-Recognition