Julia Angellica D. Zafra
1 Publication
Scopus ID: 105000426847
TENCON 2024 - 2024 IEEE Region 10 Conference (TENCON), (2024), pp. 842-845
Conference Paper | Published: January 1, 2024
Abstract
Visual impairment continues to be a global health concern. People with visual impairment experience difficulty moving around indoors, especially in unfamiliar spaces. Existing assistive technologies such as smart canes offer only point-to-point navigation or rely on infrastructure like RFID tags or beacons, and they lack comprehensive indoor navigation with obstacle detection and avoidance. This paper presents a novel indoor navigation system for visually impaired individuals using deep learning and audio guidance. The system consists of 3D-printed glasses equipped with a Raspberry Pi Camera Module v2, an audio user interface, and a processing unit comprising a Raspberry Pi 4B and a Google Coral Edge TPU (tensor processing unit). As validated in a controlled indoor environment, the deep learning models for localization, navigation, obstacle detection, and obstacle avoidance achieve high accuracy, precision, recall, and F1-score. Based on user tests using the System Usability Scale, this wearable assistive device appears to offer a promising solution for promoting independent navigation and spatial awareness among visually impaired individuals.