
Human Identification in Metaverse Using Egocentric Iris Recognition
  • Kuo Wang,
  • Ajay Kumar


In recent years, electronic glasses, including augmented reality (AR), virtual reality (VR), and mixed reality (MR) devices that seamlessly connect the natural and virtual worlds, have developed significantly. Ocular images are inherently acquired during the immersive experiences offered by such devices and can enable the verification of privileged identities during live broadcasts or meetings in virtual spaces. The lack of any public database, and of any specialized framework, is one of the key challenges in advancing iris recognition capability in the metaverse and similar virtual spaces. We introduce the first publicly available database of iris images, acquired from 384 different subjects, to advance iris recognition using a generalized AR/VR device. Conventional iris recognition methods offer only limited performance on such challenging iris images. This paper introduces an accurate and generalizable framework for iris recognition using AR/VR devices. The proposed framework is based on a convolutional network that uses a specifically designed shifted and extended quadlet loss function, enabling the network to accurately learn the discriminative iris features preserved in close-range and off-angle iris images. The framework introduced in this work can also adaptively consolidate the spatially corresponding features, and abstract features from the other ocular details, for more accurate matching. Thorough experimental results presented in this paper, comparing against several classical and state-of-the-art iris recognition methods, consistently validate the effectiveness of the proposed approach, with respective improvements of 96.30%, 30.58%, and 27.23% in true accept rate (at a false accept rate of 0.0001), and of 85.65%, 49.91%, and 76.56% in equal error rate.
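The abstract does not specify the exact form of the shifted and extended quadlet loss. As a rough illustration of the general family of four-sample metric losses it extends, the sketch below implements a standard quadruplet-style margin loss in NumPy; the function name, margins, and distance choice are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=0.5, margin2=0.25):
    """Hypothetical quadruplet-style margin loss (illustrative only;
    the paper's shifted and extended quadlet loss is not detailed in
    the abstract).

    Pushes the anchor-positive distance below the anchor-negative
    distance by margin1, and below the distance between two samples
    from other identities by margin2.
    """
    d_ap = np.sum((anchor - positive) ** 2)      # same-identity pair
    d_an = np.sum((anchor - negative1) ** 2)     # cross-identity pair
    d_nn = np.sum((negative1 - negative2) ** 2)  # two other identities
    return (max(0.0, d_ap - d_an + margin1) +
            max(0.0, d_ap - d_nn + margin2))

# Well-separated embeddings incur no loss; a distant positive does.
good = quadruplet_loss(np.array([0.0, 0.0]), np.array([0.1, 0.0]),
                       np.array([1.0, 0.0]), np.array([0.0, 1.0]))
bad = quadruplet_loss(np.array([0.0, 0.0]), np.array([2.0, 0.0]),
                      np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The second margin term, involving only non-anchor identities, is what distinguishes a quadruplet loss from a triplet loss: it encourages a larger inter-class spread across the whole embedding space rather than only around each anchor.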