
Omnidirectional Video Super-Resolution using Deep Learning
  • Arbind Agrahari Baniya,
  • Glory Lee,
  • Peter Eklund,
  • Sunil Aryal
Deakin University

Corresponding Author: [email protected]

Abstract

Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution of 360° videos means that each degree of the field of view is represented by too few pixels, limiting the visual quality of the immersive experience. Deep learning Video Super-Resolution (VSR) techniques developed for conventional videos could provide a promising software-based solution; however, these techniques do not address the distortion present in equirectangular projections of 360° video signals. An additional obstacle is the scarcity of 360° video datasets available for study. To address these issues, this paper introduces a novel 360° Video Dataset (360VDS), together with a study of the extensibility of conventional VSR models to 360° videos. This paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, dispensing with conventional VSR techniques such as alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural sub-components, targeted training, and optimisation.
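Although the abstract does not detail the S3PO loss, the distortion it targets is well understood: an equirectangular projection stretches the sphere so that pixels near the poles cover far less spherical surface than pixels at the equator, with the effective area shrinking in proportion to the cosine of latitude. As a minimal illustrative sketch only (not the paper's actual loss; the cosine weighting follows the WS-PSNR convention, and all function names here are hypothetical), a latitude-weighted L1 loss for equirectangular frames might look like this in PyTorch:

import torch

def equirectangular_weights(height: int, width: int) -> torch.Tensor:
    # Latitude of each pixel-row centre, ranging over (-pi/2, pi/2).
    rows = torch.arange(height, dtype=torch.float32)
    latitude = (rows + 0.5) / height * torch.pi - torch.pi / 2
    # Cosine weighting: equator rows weigh ~1, polar rows ~0 (WS-PSNR convention).
    w = torch.cos(latitude)
    return w.unsqueeze(1).expand(height, width)  # shape (H, W)

def weighted_l1_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    # sr, hr: (N, C, H, W) super-resolved and ground-truth equirectangular frames.
    n, c, h, w = sr.shape
    weights = equirectangular_weights(h, w).to(sr.device)
    # Weighted mean of the per-pixel absolute error.
    return (weights * (sr - hr).abs()).sum() / (weights.sum() * n * c)

# Example: a batch of two 64x128 RGB frames.
sr = torch.rand(2, 3, 64, 128)
hr = torch.rand(2, 3, 64, 128)
print(weighted_l1_loss(sr, hr))

Under this weighting, rows near the top and bottom of the frame contribute almost nothing to the loss, so the optimiser is not rewarded for fitting polar regions whose pixel count is disproportionate to their spherical area.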
Published in IEEE Transactions on Multimedia, vol. 26, pp. 540-554, 2024. DOI: 10.1109/TMM.2023.3267294