Sign Language Interpreter Using Computer Vision and LeNet-5 Convolutional Neural Network Architecture


Authors : Shreya Vishwanath; Shaik Sohail Yawer

Volume/Issue : Volume 6 - 2021, Issue 5 - May

Google Scholar : http://bitly.ws/9nMw

Scribd : https://bit.ly/3gc1y7o

A gesture is a form of nonverbal communication that uses movements of the hands or face to convey an idea, opinion, or emotion. Sign language is a way for deaf and mute people to communicate with others through such gestures. Sign language is widely used within the deaf and mute community, but the general public is far less familiar with it. Hand-gesture recognition systems have grown popular because they let deaf and mute people communicate with others; many of these systems, however, remain limited to specialized applications and costly hardware. We therefore investigate a simpler technique that requires fewer resources: a personal computer with a web camera. Gestures are captured as images through the webcam, and image processing is applied to extract the hand shape. The resulting images are then interpreted using the LeNet-5 Convolutional Neural Network architecture.

Keywords : Gesture; Image Processing; Convolutional Neural Network; Numbers; Digits; OpenCV; LeNet-5; Parameters.
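The interpretation stage names the classic LeNet-5 architecture. A minimal sketch of that network in PyTorch follows, assuming 32×32 single-channel inputs (as in the original LeNet-5) and ten output classes for digit gestures; the class count and framework are assumptions for illustration, not details from the paper:

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """LeNet-5: two conv/pool stages followed by three fully
    connected layers, with tanh activations and average pooling
    as in the original design."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.Tanh(),
            nn.Linear(120, 84),
            nn.Tanh(),
            nn.Linear(84, n_classes),         # one logit per gesture class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A preprocessed 32x32 silhouette, batched and normalized, would be
# passed through the model to obtain per-class scores:
# logits = LeNet5()(torch.zeros(1, 1, 32, 32))
```

The fixed 32×32 input explains why the preprocessing stage must resize the extracted hand shape before classification.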
