Zhanat Makhataeva et al.

Augmented reality (AR) offers novel ways to design, curate, and deliver information to users by integrating virtual, computer-generated objects into a real-world environment. This work presents an AR-based human memory augmentation system that uses computer vision (CV) and artificial intelligence (AI) to replace the internal mental representation of objects in the environment with an external augmented representation. The system consists of two components: (1) an AR headset and (2) a computing station. The AR headset runs an application that senses the indoor environment, sends data to the computing station for processing, receives the processed data, and updates the external representation of objects using a virtual 3D object projected into the real environment in front of the user’s eyes. The computing station performs computer vision-based self-localization in the indoor environment, object detection, and object-to-location binding using first-person view (FPV) data received from the AR headset. We designed a behavioral study to evaluate the usability of the system. In a pilot study with 26 participants (12 females and 14 males), we investigated human performance in an experimental task that involved remembering the positions of objects in a physical space and marking the positions of the learned objects on a two-dimensional (2D) map of that space. The study was conducted under two conditions: with and without the AR system. Under both conditions, we assessed system usability, subjective workload, and performance variables. The results showed that AR-based augmentation of the mental representation of objects indoors reduced cognitive load and increased performance accuracy.
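The abstract does not disclose implementation details, so the following is only a minimal Python sketch of what the object-to-location binding step on the computing station might look like, assuming the object detector returns camera-frame 3D positions and the self-localization module returns a world-frame headset pose. All names here (Detection, HeadsetPose, bind_objects_to_locations) are hypothetical and not taken from the paper.

```python
"""Hypothetical sketch (not the authors' code) of object-to-location binding:
the computing station combines the headset's self-localized pose with per-frame
object detections to maintain an external map of object positions that backs
the AR overlay. Data layouts and names are assumptions for illustration."""

from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class Detection:
    """A single object detected in a first-person-view (FPV) frame."""
    label: str                # e.g. "mug", "keys"
    position_cam: np.ndarray  # 3D position in the camera frame (metres)


@dataclass
class HeadsetPose:
    """Headset pose from indoor self-localization, expressed in the world frame."""
    rotation: np.ndarray      # 3x3 rotation matrix, camera -> world
    translation: np.ndarray   # 3-vector, camera origin in the world frame


def bind_objects_to_locations(
    pose: HeadsetPose,
    detections: List[Detection],
    object_map: Dict[str, np.ndarray],
) -> Dict[str, np.ndarray]:
    """Transform each detection into the world frame and update the
    object-to-location map used to place virtual 3D objects."""
    for det in detections:
        world_pos = pose.rotation @ det.position_cam + pose.translation
        object_map[det.label] = world_pos  # latest observation wins
    return object_map


if __name__ == "__main__":
    # Toy example: one detection, headset at the origin facing straight ahead.
    pose = HeadsetPose(rotation=np.eye(3), translation=np.zeros(3))
    detections = [Detection(label="mug", position_cam=np.array([0.2, 0.0, 1.5]))]
    object_map: Dict[str, np.ndarray] = {}
    bind_objects_to_locations(pose, detections, object_map)
    print(object_map)  # {'mug': array([0.2, 0. , 1.5])}
```

In a deployed system this update would run per frame on the computing station and the resulting map would be streamed back to the headset for rendering; that loop is omitted here for brevity.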
