Sign Language Detection Using Machine Learning


Authors : Sahilee Misal; Ujwala Gaikwad

Volume/Issue : Volume 9 - 2024, Issue 8 - August

Google Scholar : https://shorturl.at/IfPyc

Scribd : https://shorturl.at/GNrK3

DOI : https://doi.org/10.38124/ijisrt/IJISRT24AUG347

Abstract : This paper investigates the application of machine learning for sign language detection. The objective is to develop a model that translates sign language into spoken language, bridging the communication gap between deaf and hearing individuals. You Only Look Once (YOLO), a deep learning object detection algorithm, is employed to train a model on a dataset of labeled sign language images derived from video data. The system achieves real-time sign detection in videos. However, challenges include the scarcity of large, labeled datasets and the inherent ambiguity of certain signs, which can lead to reduced detection accuracy. This research contributes to the field of Assistive Technologies (AT) by promoting accessibility and social inclusion for the deaf community.
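The abstract's detection-to-translation pipeline can be sketched in outline. The snippet below shows only a plausible post-processing step, collapsing per-frame YOLO class labels into a sign sequence while dropping low-confidence and flickering detections; the labels, threshold, and function name are illustrative assumptions, not taken from the paper.

```python
def frames_to_signs(frame_detections, conf_threshold=0.5, min_run=3):
    """Collapse per-frame (label, confidence) detections into a sign sequence.

    A sign is emitted only when the same label persists for at least
    `min_run` consecutive frames above the confidence threshold, which
    suppresses flicker between adjacent, visually similar signs -- one way
    to mitigate the sign-ambiguity problem the abstract mentions.
    """
    signs, current, run = [], None, 0
    for label, conf in frame_detections:
        if conf < conf_threshold:
            label = None  # treat low-confidence frames as no detection
        if label is not None and label == current:
            run += 1
        else:
            current, run = label, (1 if label is not None else 0)
        if run == min_run:  # emit once, on the frame that completes the run
            signs.append(current)
    return signs

# Example: a noisy stream of per-frame detections
stream = [("HELLO", 0.9)] * 4 + [("THANKS", 0.3)] + [("THANKS", 0.8)] * 3
print(frames_to_signs(stream))  # ['HELLO', 'THANKS']
```

In a full system, `frame_detections` would come from running a trained YOLO model on successive video frames (e.g. via the Ultralytics YOLOv5 inference API cited in the references) and the resulting sign sequence would be passed to a text-to-speech stage.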

Keywords : Sign Language Detection, Machine Learning, CNN, YOLO, Artificial Intelligence (AI), American Sign Language (ASL), Indian Sign Language (ISL).

References :

  1. The World Federation of the Deaf: https://wfdeaf.org/
  2. American Speech-Language-Hearing Association (ASHA): https://www.asha.org/
  3. Pilán, I., & Bustos, A. (2014, September). Sign language recognition: State of the art and future challenges. https://www.researchgate.net/publication/262187093_Sign_language_recognition
  4. Deepsign: Sign Language Detection and Recognition Using Deep Learning: https://www.mdpi.com/2079-9292/11/11/1780
  5. Ghosh, S., & Munshi, S. (2012, March). Sign language recognition using support vector machine. In 2012 International Conference on Signal Processing, Computing and Communication (ICSPCC) (pp. 1-5). IEEE https://www.researchgate.net/publication/262233246_Sign_Language_Recognition_with_Support_Vector_Machines_and_Hidden_Conditional_Random_Fields_Going_from_Fingerspelling_to_Natural_Articulated_Words
  6. Vogler, C., & Metaxas, D. (2000). ASL recognition based on 3D hand posture and motion features. In Proceedings of the Fifth International Conference on automatic face and gesture recognition (pp. 129-134). IEEE https://www.sciencedirect.com/science/article/pii/S2214785321025888
  7. Ji, S., Xu, W., Yang, M., & Yu, X. (2010). 3D convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1), 221-231. http://ieeexplore.ieee.org/document/6165309/
  8. Pilán, I., & Bustos, A. (2014, September). Sign language recognition: State of the art and future challenges https://www.researchgate.net/publication/262187093_Sign_language_recognition_State_of_the_art
  9. Silvestre, J. D. C., & Lopes, H. (2015). Real-time visual sign language recognition using CNN architecture. Universal Access in the Information Society, 18(4), 825-841. https://www.researchgate.net/publication/364185120_Real-Time_Sign_Language_Detection_Using_CNN
  10. Alsharhan, M., Yassine, M., & Al-Alsharhan, A. (2014, December). Sign language gesture recognition using pca and neural networks. In 2014 International Conference on Frontiers in Artificial Intelligence and Applications (FIAIA) (pp. 260-265). IEEE
  11. Mittal, A., & Kumar, M. (2012, July). Vision based hand gesture recognition for sign language. In 2012 10th IEEE International Conference on Advanced Computing (ICoAC) (pp. 308-313). IEEE
  12. Ji, S., Xu, W., Yang, M., & Yu, X. (2010). 3D convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1), 221-231. http://ieeexplore.ieee.org/document/6165309/
  13. OpenCV (Open Source Computer Vision Library): https://opencv.org/
  14. YOLOv5 Model Training Documentation
  15. Ultralytics YOLO: https://github.com/ultralytics/yolov5
  16. Bergstra, J., & Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13, 281-305.

