Authors :
Krishi Sehrawat; Md Adnan Ahsani; Anagh C. Nambiar; Durgesh Kumar
Volume/Issue :
Volume 10 - 2025, Issue 11 - November
Google Scholar :
https://tinyurl.com/2s49rdwu
Scribd :
https://tinyurl.com/4r62sk6n
DOI :
https://doi.org/10.38124/ijisrt/25nov1389
Abstract :
Visually impaired individuals often face major difficulties navigating independently, especially in new or unfamiliar settings. Conventional aids such as white canes and guide dogs provide only partial support and depend heavily on physical infrastructure. This paper introduces VISNAV, an AI-driven Augmented Reality (AR) navigation system designed to enhance mobility and safety for individuals with visual impairments. By combining computer vision, AR overlays, and multimodal feedback (audio, haptic, and voice guidance), VISNAV enables real-time obstacle detection, route planning, and situational awareness. The system applies deep learning models such as YOLOv8 and MobileNet SSD for accurate object recognition, integrated with ARKit/ARCore for rendering navigation paths. Preliminary results indicate that VISNAV reduces reliance on external aids while delivering an intuitive, scalable, and cost-effective navigation solution. The paper reviews current navigation technologies, explains the design and methodology of VISNAV, evaluates its performance in simulated environments, and highlights prospects for large-scale adoption.
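
The abstract names a concrete pipeline (YOLOv8 detection driving audio/haptic cues), though the paper's actual implementation is not reproduced here. As a rough illustration only, the sketch below wires the Ultralytics YOLOv8 API to OpenCV camera capture and offline text-to-speech; the model file, camera index, and box-height proximity heuristic are illustrative assumptions, not details taken from VISNAV.

```python
# Minimal sketch of an obstacle-detection-to-audio loop of the kind the
# abstract describes. Assumptions (not from the paper): the "yolov8n.pt"
# model, camera index 0, and the box-height proximity heuristic.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # lightweight YOLOv8 variant for on-device use
tts = pyttsx3.init()         # offline text-to-speech engine
cap = cv2.VideoCapture(0)    # camera stream (e.g., phone or headset feed)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current frame; results[0] holds this frame's boxes.
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = model.names[int(box.cls[0])]
        # Crude proximity heuristic: a bounding box covering more than half
        # the frame height is treated as a nearby obstacle. A real system
        # would fuse depth from ARKit/ARCore rather than rely on box size.
        if (y2 - y1) / frame.shape[0] > 0.5:
            tts.say(f"{label} ahead")
            tts.runAndWait()
cap.release()
```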
Keywords :
Augmented Reality, Visually Impaired Navigation, Computer Vision, Assistive Technology, Deep Learning, AI-Based Mobility.
References :
- I. Tokmurziyev, A. Yerkebulan, et al., “LLM-Glasses: GenAI-driven Glasses with Haptic Feedback for Navigation of Visually Impaired People,” arXiv preprint arXiv:2503.16475, 2025.
- N. Pfitzer, M. Becker, et al., “MR.NAVI: Mixed-Reality Navigation Assistant for the Visually Impaired,” arXiv preprint arXiv:2506.05369, 2025.
- F. Zare, A. Kargar, et al., “A Wearable RFID-Based Navigation System for the Visually Impaired,” arXiv preprint arXiv:2303.14792, 2023.
- A. Jadhav, S. Patil, et al., “AI Guide Dog: Egocentric Path Prediction on Smartphone,” arXiv preprint arXiv:2501.07957, 2025.
- J. Zhang, et al., “A Wearable Visually Impaired Assistive System Based on Semantic Vision SLAM for Grasping Operation,” Sensors, vol. 24, no. 11, Art. no. 3593, 2024.
- M. Bamdad, et al., “SLAM for Visually Impaired People: a Survey,” arXiv preprint arXiv:2212.04745, 2022, revised 2024.
- D. Santiago, et al., “A Multi-Sensory Guidance System for the Visually Impaired Using YOLO and ORB-SLAM,” Information, vol. 13, no. 7, Art. no. 343, 2022.
- K. Shi, “Augmented Audio Reality: Bridging Mobility Gaps for the Visually Impaired,” Journal of Electronics and Information Science, 2025.
- L. Chen, et al., “Visual Impairment Spatial Awareness System for Indoor Navigation and Daily Activities,” Journal of Imaging, vol. 11, no. 1, Art. no. 9, 2024.
- P. Singh, et al., “Obstacle Detection System for Navigation Assistance of Visually Impaired People Based on Deep Learning Techniques,” Sensors, vol. 23, no. 11, Art. no. 5262, 2023.
- H. Wang, et al., “Technological Advancements in Human Navigation for the Visually Impaired: A Systematic Review,” Sensors, vol. 25, no. 7, Art. no. 2213, 2025.
- J. Martinez, et al., “A Navigation and Augmented Reality System for Visually Impaired People,” Sensors, vol. 21, no. 9, Art. no. 3061, 2021.
- Be My Eyes, “Be My AI,” Wikipedia, 2023. [Online]. Available: https://en.wikipedia.org/wiki/Be_My_Eyes
- JAMA Ophthalmology Research Group, “New Device Could Help Visually Impaired Avoid Obstacles,” JAMA Ophthalmology, 2021.
- R. Gupta, et al., “Deep Learning-Based Assistive Navigation for Visually Impaired Using YOLOv5,” IEEE Access, vol. 10, pp. 122344–122356, 2022.
- S. Park, et al., “AR Glasses for Visually Impaired Navigation: A Low-Cost Prototype,” IEEE Transactions on Human-Machine Systems, vol. 53, no. 2, pp. 145–158, 2023.
- A. Ali, et al., “Multimodal Feedback for Navigation Assistance: A Comparative Study of Haptic and Audio Cues,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, no. 8, pp. 1772–1784, 2022.
- Y. Kim, et al., “Hybrid Indoor Localization for the Visually Impaired Using SLAM and Wi-Fi Fingerprinting,” IEEE Sensors Journal, vol. 23, no. 4, pp. 5678–5689, 2023.
- R. Mohan, et al., “Real-Time Object Detection for Assistive Systems Using YOLOv8,” Applied Sciences, vol. 13, no. 15, Art. no. 9120, 2023.
- L. Rossi, et al., “ARCore-Based Navigation System for Visually Impaired Pedestrians,” Journal of Ambient Intelligence and Humanized Computing, vol. 15, no. 6, pp. 3333–3345, 2024.
- T. Nguyen, et al., “Voice-Based Interaction for Navigation Support in Smart Cities,” IEEE Internet of Things Journal, vol. 11, no. 3, pp. 5050–5062, 2024.
- K. Patel, et al., “Lightweight Deep Learning Models for On-Device Navigation Assistance,” Pattern Recognition Letters, vol. 169, pp. 30–38, 2023.
- P. Banerjee, et al., “Smart Cane with IoT Integration for Navigation of the Visually Impaired,” IEEE Access, vol. 12, pp. 78945–78958, 2024.