Authors :
Dr. Sk. Mahboob Basha; H. C. Srivalli; B. Jahnavi; C. V. Basanth
Volume/Issue :
Volume 8 - 2023, Issue 4 - April
Google Scholar :
https://bit.ly/3TmGbDi
Scribd :
https://bit.ly/3LBI64K
DOI :
https://doi.org/10.5281/zenodo.7922779
Abstract :
The deaf community communicates primarily
through sign language. Sign language is highly
expressive, which helps conversations advance and
broaden. ASL is often regarded as a universal sign
language, although numerous variations and other sign
systems are used in various parts of the world. Sign
language assigns a relatively small set of principal ideas
to distinct appearances. The main goal of this effort is to
create a sign language recognition system that benefits
the deaf community and speeds up communication. The
project's main objective is to build a classifier-based
software model for sign language recognition. The
strategy is to identify the gestures and use classifiers to
assess their attributes: principal component analysis
(PCA) is used for gesture recognition, and a classifier
evaluates the gesture features.
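As a rough illustration of that PCA-plus-classifier pipeline (not the authors' implementation; the data arrays, image size, and component count below are hypothetical placeholders):

```python
# Minimal sketch: PCA for gesture feature reduction, then a classifier.
# X and y are hypothetical placeholders for flattened gesture images
# and their labels; a real system would load and preprocess a dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 64 * 64))   # 500 flattened 64x64 gesture images (placeholder)
y = rng.integers(0, 36, 500)     # 36 classes: 26 ASL letters + 10 digits (placeholder)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# PCA reduces each image to 50 principal components; the classifier
# then operates in that reduced feature space.
model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```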
Hand gestures have been used as a form of
communication since the earliest times, and recognising
them makes human-computer interaction (HCI) more
versatile and convenient. Accurate character
identification is therefore crucial for smooth, error-free
HCI. According to a literature review, most existing
hand gesture recognition (HGR) systems consider only a
few simple, easily discriminated motions when reporting
recognition performance. This study applies deep-
learning-based convolutional neural networks (CNNs) to
the robust modelling of static signs for sign language
recognition, handling both the ASL alphabet and digits
simultaneously. The CNNs utilised for HGR are
discussed, along with their benefits and drawbacks.
Modified AlexNet and modified VGG16 models for
classification form the foundation of the CNN
architecture. After feature extraction, a multiclass
support vector machine (SVM) classifier is built on top
of the modified pre-trained VGG16 and AlexNet
architectures. To achieve the highest recognition
performance, the results are assessed using features
taken from different layers.
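A minimal sketch of that feature-extraction-plus-SVM stage, assuming a stock torchvision VGG16 rather than the paper's modified AlexNet/VGG16 variants (input tensors and the SVM settings are hypothetical):

```python
# Sketch: use a pre-trained VGG16 as a fixed feature extractor and train
# a multiclass SVM on the extracted features. Illustrative only; the
# authors' modified AlexNet/VGG16 layers are not reproduced here.
import torch
import torchvision.models as models
from sklearn.svm import SVC

vgg16 = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg16.eval()  # disable dropout so features are deterministic

# Everything up to (but not including) the final 1000-way layer:
# the output is a 4096-dimensional feature vector per image.
feature_extractor = torch.nn.Sequential(
    vgg16.features,
    vgg16.avgpool,
    torch.nn.Flatten(),
    *list(vgg16.classifier.children())[:-1],
)

def extract_features(images: torch.Tensor):
    """images: (N, 3, 224, 224) tensor, ImageNet-normalised."""
    with torch.no_grad():
        return feature_extractor(images).numpy()

# Hypothetical usage with preprocessed ASL image tensors:
# svm = SVC(kernel="linear")  # multiclass handled via one-vs-one
# svm.fit(extract_features(train_images), train_labels)
# print(svm.score(extract_features(test_images), test_labels))
```

Comparing "various layer features" as described above would amount to truncating this extractor at different points and measuring the SVM accuracy for each cut.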
Both leave-one-subject-out and random 70-30
cross-validation were used to test the accuracy of the
HGR schemes.
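For reference, a minimal sketch of the two evaluation protocols mentioned above, assuming per-sample subject IDs are available (all arrays are hypothetical placeholders):

```python
# Sketch of the two evaluation protocols: a random 70-30 split and
# leave-one-subject-out cross-validation. The data arrays and the
# `subjects` grouping are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import (
    train_test_split, LeaveOneGroupOut, cross_val_score)
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 50))           # placeholder feature vectors
y = rng.integers(0, 36, 300)        # placeholder class labels
subjects = rng.integers(0, 5, 300)  # placeholder subject ID per sample

# Random 70-30 split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("70-30 accuracy:", clf.score(X_te, y_te))

# Leave-one-subject-out: each fold holds out all samples of one subject,
# so accuracy reflects generalisation to unseen signers.
scores = cross_val_score(SVC(), X, y, groups=subjects, cv=LeaveOneGroupOut())
print("LOSO accuracy per held-out subject:", scores)
```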
This work also examines how easily each character can
be recognised and how similar the gestures are to one
another. To demonstrate the affordability of the
approach, the experiments were run on a basic CPU
machine rather than cutting-edge GPU hardware. The
proposed system outperformed several state-of-the-art
techniques, achieving a recognition accuracy of 99.82%.
Keywords :
Sign Language, ASL (American Sign Language), Deaf Community, Gestures, Human-Computer Interaction, Hand Gesture Recognition, CNN (Convolutional Neural Network), SVM (Support Vector Machine)