The Indian deaf and mute community is troubled by a lack of capable sign language interpreters, and this shortage is reflected in the community's relative isolation. Many techniques have applied machine learning and computer vision principles to recognize sign language inputs in real time, and the convolutional neural network (CNN) is identified as one of the more accurate of these techniques. This project aims to use a CNN to classify and recognize sign language inputs and produce text-to-speech output. In doing so, sign language speakers will be able to communicate with larger audiences without depending on the availability and speed of a human interpreter. OpenCV is used for live detection of the hand gestures performed by the user, and the model is trained on Indian Sign Language. After a particular gesture is recognized, the predicted text is converted to speech so that the other person can hear what the user wants to convey.
Keywords: Sign language, OpenCV, Gestures, Convolutional Neural Networks (CNN).
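The recognition pipeline described above (capture a frame, preprocess it, classify the gesture, then speak the predicted text) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the label set, the 64×64 input size, and the single linear layer standing in for the trained CNN are all assumptions made for the sketch. In the actual system, frames would come from `cv2.VideoCapture`, the classifier would be a trained CNN, and a text-to-speech engine would voice the result.

```python
import numpy as np

# Hypothetical label set; the real ISL vocabulary is not specified in the abstract.
LABELS = ["hello", "thanks", "yes", "no"]

def preprocess(frame, size=64):
    """Reduce a color frame to a normalized grayscale square of side `size`.
    Illustrative stand-in for the OpenCV preprocessing step."""
    gray = frame.mean(axis=2)              # naive grayscale conversion
    side = min(gray.shape)
    crop = gray[:side, :side]              # square region of interest
    step = side // size
    small = crop[:step * size, :step * size]
    # block-average resize down to (size, size)
    small = small.reshape(size, step, size, step).mean(axis=(1, 3))
    return small / 255.0                   # scale pixel values to [0, 1]

def classify(image, weights):
    """Placeholder for the CNN forward pass: one linear layer plus argmax."""
    logits = weights @ image.ravel()
    return LABELS[int(np.argmax(logits))]

# Demo on a synthetic frame (a live system would read frames from the camera).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160, 3)).astype(np.float64)
x = preprocess(frame)
w = rng.normal(size=(len(LABELS), x.size))
predicted = classify(x, w)
# In the full pipeline, `predicted` would then be passed to a TTS engine.
```

The sketch separates the stages cleanly so that the placeholder classifier can later be swapped for a trained CNN without changing the capture or speech code.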