Authors :
Pratham Kumar; Rishav Sharma; Shruti; Jaya Shree; Virat Tiwari; Varun Mishra; Soumya Upadhyay
Volume/Issue :
Volume 9 - 2024, Issue 4 - April
Google Scholar :
https://tinyurl.com/yck5esu4
Scribd :
https://tinyurl.com/3au8d2eu
DOI :
https://doi.org/10.38124/ijisrt/IJISRT24APR246
Abstract :
Moving your hands in the air while the screen writes for you, or making hand signs that the system both displays and speaks aloud, may sound like a futuristic idea in the realm of image processing and gesture recognition. In this paper, we present a novel approach to an interactive learning platform in which users can draw content on the screen by moving a hand in the air and can also communicate through hand sign language, easing interaction with the hearing-impaired and non-verbal community. Our system combines both technologies to create a smooth and engaging user experience, and it can be used in interactive art installations or virtual reality setups. The air canvas enables users to draw and manipulate digital content in mid-air through object tracking built on computer vision and the MediaPipe framework, while hand gesture recognition provides real-time interpretation of hand signs to trigger actions or commands within the system. The model not only recognizes a sign but also speaks it aloud using pyttsx3, a text-to-speech conversion library, enabling effective communication between hearing users and people with non-verbal or hearing impairments. To enhance the model's performance, we validate it on a real dataset that we collected and trained on ourselves; this training was essential for refining the model's accuracy and efficiency.
Keywords :
Air Canvas, Image Processing, Gesture Recognition, Real-Time, Virtual Reality, Computer Vision, Mediapipe, Pyttsx3, Dataset.