Authors :
Nayana Prashanth; Dr. Prathapchandra
Volume/Issue :
Volume 10 - 2025, Issue 7 - July
Google Scholar :
https://tinyurl.com/3d799vek
DOI :
https://doi.org/10.38124/ijisrt/25jul230
Abstract :
The AI Air Gesture Draw seeks to revolutionize human-computer interaction by removing the dependence on
physical input devices. This cutting-edge system allows users to draw in mid-air using intuitive hand gestures, which are
captured in real time by a standard webcam and interpreted through sophisticated deep learning models. Utilizing Google’s
QuickDraw dataset and implemented with TensorFlow, the system is trained to accurately recognize a diverse array of
hand-drawn patterns. By integrating computer vision and gesture recognition, it provides a seamless and touchless drawing
experience. With significant applications in education, accessibility, and digital design, this technology creates new
opportunities for inclusive and intuitive interfaces, particularly aiding users with limited mobility or those in settings where
touch-based input is unfeasible.
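To illustrate the touchless drawing loop described above, the following is a minimal sketch, assuming OpenCV for webcam capture and MediaPipe Hands for landmark detection; neither library is named in the paper, and they stand in here for the computer-vision and gesture-recognition components. The index fingertip (MediaPipe landmark 8) is tracked frame by frame and its path is drawn onto an overlay canvas.

# Minimal sketch of a webcam air-drawing loop (assumed libraries: OpenCV, MediaPipe).
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def air_draw():
    cap = cv2.VideoCapture(0)                      # standard webcam
    canvas = None                                  # drawing surface, sized on first frame
    prev_point = None
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.flip(frame, 1)             # mirror for natural interaction
            if canvas is None:
                canvas = np.zeros_like(frame)
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                h, w, _ = frame.shape
                tip = results.multi_hand_landmarks[0].landmark[8]   # index fingertip
                point = (int(tip.x * w), int(tip.y * h))
                if prev_point is not None:
                    cv2.line(canvas, prev_point, point, (255, 255, 255), 4)
                prev_point = point
            else:
                prev_point = None                  # "pen" lifts when no hand is visible
            cv2.imshow("Air Gesture Draw", cv2.addWeighted(frame, 0.6, canvas, 0.4, 0))
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    air_draw()

In this sketch the stroke simply breaks when no hand is detected; a complete system would add explicit draw, erase, and clear gestures and pass the finished stroke image to the trained recognizer.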
Keywords :
Air Gesture Recognition, Artificial Intelligence, Convolutional Neural Network, Recurrent Neural Network, Human-Computer Interaction, Deep Learning, TensorFlow.
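To make the recognition side concrete, below is a minimal TensorFlow/Keras sketch of a convolutional classifier over the 28x28 bitmap form of the QuickDraw dataset. The architecture, class count, and training settings are illustrative assumptions, not the configuration reported by the authors.

# Minimal sketch of a CNN sketch classifier (assumed: 28x28 QuickDraw bitmaps as .npy files).
import numpy as np
import tensorflow as tf

NUM_CLASSES = 10   # illustrative subset of QuickDraw categories

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train: (N, 28, 28, 1) float32 in [0, 1]; y_train: integer class labels.
# Loading the per-class QuickDraw .npy files is left out of this sketch.
# model.fit(x_train, y_train, epochs=5, validation_split=0.1)

At inference time, the canvas produced by the drawing loop would be downscaled to 28x28, normalized, and passed to model.predict to label the drawn pattern.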
References :
- N. Hendy, H. M. Fayek, and A. Al-Hourani, “Deep Learning Approaches for Air-Writing Using Single UWB Radar,” IEEE Sensors Journal, vol. 22, no. 12, Jun. 2022.
- S. Ahmed, D. Wang, J.-Y. Park, and S. H. Cho, “UWB-Gestures, a Public Dataset of Dynamic Hand Gestures Acquired Using Impulse Radar Sensors,” Scientific Data, vol. 8, 2021.
- A. Das, S. Gawde, K. Suratwala, and D. Kalbande, “Sign Language Recognition Using Deep Learning on Custom Processed Static Gesture Images,” International Conference on Smart City and Emerging Technology (ICSCET), 2018.
- V. K. Verma, S. Srivastava, and N. Kumar, “A Comprehensive Review on Automation of Indian Sign Language,” IEEE International Conference on Advances in Computer Engineering and Applications, Mar. 2015.
- S. Beg, M. F. Khan, and F. Baig, “Text Writing in Air,” Journal of Information Display, vol. 14, 2013.
- R. Z. Khan, “Comparative Study of Hand Gesture Recognition System,” Computer Science and Information Technology (CS & IT), vol. 4, 2012.