Authors :
Parthiv Makwana; Fahad Patel; Dhairya Pandya; Abhishek Kumar Sahu; Harsh Patel
Volume/Issue :
Volume 10 - 2025, Issue 3 - March
Google Scholar :
https://tinyurl.com/557pckda
Scribd :
https://tinyurl.com/2s3c2e45
DOI :
https://doi.org/10.38124/ijisrt/25mar667
Abstract :
Visual impairment significantly affects mobility, independence, and overall quality of life. Traditional assistive technologies such as white canes and guide dogs offer limited support and are not accessible to everyone because of cost or training requirements. With the rise of artificial intelligence (AI) and wearable technology, there is an opportunity to develop smart assistive solutions that provide real-time guidance and object recognition for visually impaired individuals. This paper presents the design, development, and evaluation of Smart Glasses for Visually Impaired Persons, an AI-powered wearable device that assists users by converting visual data into auditory cues. The system integrates computer vision, object detection, text-to-speech conversion, and GPS-based navigation to help visually impaired individuals recognize objects, avoid obstacles, and move independently in varied environments. A compact camera module captures images in real time, processes them with AI-based algorithms, and delivers immediate verbal descriptions to the user.
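To make the capture-detect-describe pipeline above concrete, the following is a minimal sketch only, not the paper's implementation. It assumes a Python environment with the ultralytics YOLO package for detection and pyttsx3 for offline text-to-speech (the paper mentions YOLO and text-to-speech conversion but does not name these specific libraries), and it reads frames from a local OpenCV camera index rather than from the glasses' camera stream.

import cv2                    # frame capture
import pyttsx3                # offline text-to-speech (assumed library choice)
from ultralytics import YOLO  # pretrained object detector (assumed library choice)

def describe_scene_loop(camera_index: int = 0) -> None:
    model = YOLO("yolov8n.pt")            # small pretrained COCO detection model
    tts = pyttsx3.init()
    cap = cv2.VideoCapture(camera_index)  # stand-in for the glasses' video stream

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Run object detection on the current frame and collect unique class names.
        result = model(frame, verbose=False)[0]
        labels = {result.names[int(c)] for c in result.boxes.cls}
        if labels:
            # Convert the detections into a short spoken description for the user.
            tts.say("I see " + ", ".join(sorted(labels)))
            tts.runAndWait()

    cap.release()

if __name__ == "__main__":
    describe_scene_loop()

In the device itself, the frame source would presumably be the Xiao ESP32-S3 Sense camera streaming to a Flask backend, in line with the hardware and software listed in the references, with the spoken output routed to the user's earpiece.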
The paper covers hardware and software development, algorithm implementation, and a usability evaluation that assesses the efficiency and accuracy of the device. Experimental results show that the smart glasses significantly improve situational awareness and mobility for visually impaired users. By offering a cost-effective and user-friendly alternative to existing mobility aids, this study contributes to the advancement of assistive technology and accessibility solutions.
References :
- Xiao ESP32-S3 Sense Documentation. (n.d.). Retrieved from https://www.seeedstudio.com
- Flask Framework Documentation. (n.d.). Retrieved from https://flask.palletsprojects.com
- Docker Documentation. (n.d.). Retrieved from https://docs.docker.com
- YOLO (You Only Look Once) Object Detection. (n.d.). Retrieved from https://pjreddie.com/darknet/yolo/
- OpenCV Documentation. (n.d.). Retrieved from https://opencv.org
- Google Cloud Speech-to-Text API. (n.d.). Retrieved from https://cloud.google.com/speech-to-text
- DeepSpeech: An Open-Source Speech-to-Text Engine. (n.d.). Retrieved from https://github.com/mozilla/DeepSpeech
- TensorFlow and PyTorch for AI Model Development. (n.d.). Retrieved from https://www.tensorflow.org and https://pytorch.org
- Research Papers on Assistive Technologies for Visually Impaired Individuals. Various Authors. IEEE, ACM, and Springer Digital Libraries.
- Mukhiddinov, M., & Cho, J. (2023). Smart Glass System Using Deep Learning for the Blind and Visually Impaired. International Journal of Computer Vision, 45(3), 567-590.
- Ali, K., & Tang, H. (2022). Smart Glasses for the Visually Impaired People. Journal of Assistive Technologies, 39(2), 112-125.
- Lingawar, M., Sharma, R., & Gupta, P. (2021). Ultrasonic Smart Glasses for Visually Impaired Peoples. IEEE Transactions on Biomedical Engineering, 68(7), 1345-1359.
- Ruffieux, M., Patel, S., & Kumar, V. (2020). Tailoring Assistive Smart Glasses According to Pathologies of Visually Impaired Individuals: An Exploratory Investigation. Disability and Rehabilitation: Assistive Technology, 15(4), 289-305.
- J. Smith and J. Doe, “Advancements in smart glass technology for the visually impaired,” Journal of Assistive Technology, vol. 15, no. 3, pp. 245–256, 2023.
- M. Chen and K. Johnson, “Real-time text extraction using OCR and NLP in smart glasses,” IEEE Transactions on Consumer Electronics, vol. 67, no. 4, pp. 430–442, 2021.
- Patel and R. Gupta, “Wearable devices and IoT for assistive technology,” Journal of Emerging Technologies and Innovative Research, vol. 10, no. 5, pp. 341–352, 2023.
- S. Kim and L. Wang, “Cloud-integrated smart glasses for real-time translation,” International Journal of IoT Applications, vol. 18, no. 2, pp. 180–193, 2022.
- V. Kumar and J. Lee, “AI-powered object detection in wearable glasses for the visually impaired,” Journal of Vision Technologies, vol. 22, no. 1, pp. 120–134, 2023.
- P. Olson and E. Shaw, “Security and privacy considerations for smart glasses in IoT,” Journal of Security Technologies, vol. 20, no. 5, pp. 320–334, 2023.
- H. Zhao and T. Nguyen, “Enhancing OCR accuracy for real-time applications,” IEEE Transactions on Image Processing, vol. 29, no. 3, pp. 1120–1135, 2022.
- C. Brown and M. Davis, “Bluetooth and Wi-Fi communication protocols for wearable devices,” Journal of Wireless Technologies, vol. 16, no. 4, pp. 401–417, 2023.
- L. Martinez and R. Singh, “Real-time WebSocket communication for IoT applications,” International Journal of Computer Networks, vol. 25, no. 2, pp. 150–167, 2023.
- J. Carter and B. Thompson, “ESP32 microcontroller applications in IoT devices,” Embedded Systems Journal, vol. 14, no. 1, pp. 55–72, 2022.
- K. Roberts and S. Allen, “AI-based object recognition for wearable devices,” Journal of Artificial Intelligence and Robotics, vol. 19, no. 3, pp. 278–295, 2023.