Real-Time Object Detection and Augmented Reality to Support Low-Vision Navigation and Object Localization

Low-vision (LV) people often face difficulties in navigating complex environments and recognizing objects due to reduced visual acuity, contrast sensitivity, or field of view. In this paper, we present a prototype that combines real-time object detection with augmented reality (AR) visualization to enhance spatial awareness for LV users, supporting safe navigation and object localization in indoor spaces. The system integrates an RGB-D camera and a Microsoft HoloLens 2 headset via ROS and Unity. A YOLOv11-based perception pipeline detects and localizes static and dynamic objects in 3D, and high-contrast AR icons are displayed above the detected objects within the user’s field of view, conveying both their position and identity.
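
To make the pipeline concrete, the sketch below shows a minimal ROS perception node in the spirit of the system described above: it synchronizes RGB and depth frames, runs a YOLOv11 detector, back-projects each detection to a 3D point with a pinhole camera model, and publishes one labeled marker per object that an AR client could anchor an icon to. The topic names, camera intrinsics, depth encoding, and marker-based output used here are illustrative assumptions, not the prototype's actual configuration.

```python
# Minimal sketch of a detect-and-back-project ROS node (assumed topics/intrinsics).
import message_filters
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from visualization_msgs.msg import Marker
from ultralytics import YOLO  # provides the YOLO11 family of detectors

# Placeholder RGB-D intrinsics; in practice these would come from CameraInfo.
FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0


class DetectorNode:
    def __init__(self):
        self.bridge = CvBridge()
        self.model = YOLO("yolo11n.pt")  # lightweight YOLO11 checkpoint (assumed)
        self.marker_pub = rospy.Publisher("/lv_ar/detections", Marker, queue_size=10)
        rgb_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
        depth_sub = message_filters.Subscriber("/camera/depth/image_raw", Image)
        sync = message_filters.ApproximateTimeSynchronizer(
            [rgb_sub, depth_sub], queue_size=5, slop=0.05)
        sync.registerCallback(self.on_frames)

    def on_frames(self, rgb_msg, depth_msg):
        rgb = self.bridge.imgmsg_to_cv2(rgb_msg, "bgr8")
        depth = self.bridge.imgmsg_to_cv2(depth_msg, "passthrough")  # assumed mm
        result = self.model(rgb, verbose=False)[0]
        for i, box in enumerate(result.boxes.xyxy.cpu().numpy()):
            u = int((box[0] + box[2]) / 2)  # bounding-box centre (pixels)
            v = int((box[1] + box[3]) / 2)
            z = float(depth[v, u]) / 1000.0  # metres
            if z <= 0.0:
                continue  # no valid depth reading at this pixel
            # Back-project the pixel into the camera frame (pinhole model).
            x = (u - CX) * z / FX
            y = (v - CY) * z / FY
            label = result.names[int(result.boxes.cls[i])]
            self.publish_marker(i, x, y, z, label, rgb_msg.header)

    def publish_marker(self, mid, x, y, z, label, header):
        # One text marker per detection; an AR client would anchor an icon here.
        m = Marker()
        m.header = header
        m.id = mid
        m.type = Marker.TEXT_VIEW_FACING
        m.action = Marker.ADD
        m.text = label
        m.pose.position.x, m.pose.position.y, m.pose.position.z = x, y, z
        m.pose.orientation.w = 1.0
        m.scale.z = 0.15  # text height in metres
        m.color.r = m.color.g = m.color.b = m.color.a = 1.0
        m.lifetime = rospy.Duration(0.5)
        self.marker_pub.publish(m)


if __name__ == "__main__":
    rospy.init_node("lv_ar_detector")
    DetectorNode()
    rospy.spin()
```

In the prototype the 3D detections are consumed by the Unity client on the HoloLens 2 to render the high-contrast icons; the marker message above is only a stand-in for that interface.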

First-person point-of-view footage from the HoloLens 2 headset of an indoor office setup with a person, a table, a laptop, chairs, and a sofa. Visual icons are anchored to the detected objects and illustrate what each object is.

Publication and Video

  • (To appear in AlpCHI'26) Yong-Joon Thoo, Karim Aebischer, Nicolas Ruffieux, and Denis Lalanne. 2026. Real-Time Object Detection and Augmented Reality to Support Low-Vision Navigation and Object Localization: A Demonstration. [Video]