Sign Language Recognition System with Speech Output


Authors : Vinayak S. Kunder, Aakash A. Bhardwaj, Vipul D. Tank.

Volume/Issue : Volume 4 - 2019, Issue 3 - March

Google Scholar : https://goo.gl/DF9R4u

Scribd : https://goo.gl/WRH936

Thomson Reuters ResearcherID : https://goo.gl/KTXLC3

The Indian deaf and mute community is troubled by a shortage of capable sign language interpreters, and this shortage is reflected in the community's relative isolation. Many techniques have been used to apply machine learning and computer vision principles to recognize sign language inputs in real time, and the convolutional neural network (CNN) has been identified as one of the more accurate among them. This project aims to use a CNN to classify and recognize sign language inputs and produce text-to-speech outputs. In doing so, sign language speakers will be able to communicate with larger audiences without depending on the speed of a human interpreter. OpenCV is used for live detection of the hand gestures performed by the user, and the model is trained on Indian Sign Language. After a particular gesture is recognized, the predicted text is converted to speech so that the other person can hear the message the user wants to convey.

Keywords : Sign Language, OpenCV, Gestures, Convolutional Neural Networks (CNN).
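The recognise-then-speak loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the label set, confidence threshold, and helper names (`predict_label`, `recognize_and_speak`) are all assumptions. In the real system, `classify` would be the trained CNN's predict call, `speak` a text-to-speech engine such as pyttsx3, and the frames would come from an OpenCV `cv2.VideoCapture` stream.

```python
from typing import Callable, Optional, Sequence

# Illustrative label set for a handful of Indian Sign Language
# gestures (assumed for this sketch; the paper's own classes may differ).
ISL_LABELS = ["hello", "thank you", "yes", "no", "help"]

def predict_label(probabilities: Sequence[float],
                  labels: Sequence[str] = ISL_LABELS,
                  threshold: float = 0.8) -> Optional[str]:
    """Map a CNN softmax output vector to a gesture label.

    Returns None when the network is not confident enough, so noisy
    frames do not trigger spurious speech output.
    """
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    if probabilities[best] < threshold:
        return None
    return labels[best]

def recognize_and_speak(frames,
                        classify: Callable[[object], Sequence[float]],
                        speak: Callable[[str], None]) -> list:
    """Run the recognise-then-speak loop over a stream of frames.

    `classify` stands in for the trained CNN and `speak` for the
    text-to-speech engine. Consecutive duplicate predictions are
    suppressed so a gesture held across many frames is spoken once.
    """
    spoken = []
    last = None
    for frame in frames:
        label = predict_label(classify(frame))
        if label is not None and label != last:
            speak(label)
            spoken.append(label)
        last = label
    return spoken
```

Keeping the capture, classification, and speech stages behind plain callables like this makes the glue logic testable without a camera, a trained model, or an audio device attached.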
