
Omnidirectional Video Super-Resolution using Deep Learning

posted on 2023-04-11, 16:20, authored by Arbind Agrahari Baniya, Glory Lee, Peter Eklund, Sunil Aryal

Omnidirectional Videos (or 360° videos) are widely used in Virtual Reality (VR) to facilitate immersive and interactive viewing experiences. However, the limited spatial resolution of 360° videos does not allow each degree of view to be represented with adequate pixels, limiting the visual quality of the immersive experience. Deep learning Video Super-Resolution (VSR) techniques developed for conventional videos could offer a promising software-based solution; however, these techniques do not address the distortion present in equirectangular projections of 360° video signals. A further obstacle is the scarcity of 360° video datasets for study. To address these issues, this paper introduces a novel 360° Video Dataset (360VDS) together with a study of the extensibility of conventional VSR models to 360° videos. The paper further proposes a novel deep learning model for 360° Video Super-Resolution (360° VSR), called Spherical Signal Super-resolution with a Proportioned Optimisation (S3PO). S3PO adopts recurrent modelling with an attention mechanism, free of conventional VSR techniques such as alignment. With a purpose-built feature extractor and a novel loss function addressing spherical distortion, S3PO outperforms most state-of-the-art conventional VSR models and 360°-specific super-resolution models on 360° video datasets. A step-wise ablation study is presented to understand and demonstrate the impact of the chosen architectural subcomponents, targeted training and optimisation.
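The abstract mentions a loss function that accounts for the spherical distortion of equirectangular projections, where rows near the poles are over-represented relative to their true area on the sphere. The paper's actual loss (the "proportioned optimisation") is not detailed here; as a hedged illustration of the general idea, the sketch below shows a cosine-latitude row weighting of the kind used in WS-PSNR-style spherical quality metrics, applied to a simple MSE. The function names and the choice of MSE are this sketch's own assumptions, not the paper's method.

```python
import numpy as np

def erp_row_weights(height):
    """Per-row area weights for an equirectangular (ERP) frame.

    Rows near the equator cover more of the sphere than rows near the
    poles, so each row is weighted by the cosine of its latitude
    (the weighting used in WS-PSNR-style spherical metrics).
    """
    rows = np.arange(height)
    # Latitude of each row's centre, in radians, in (-pi/2, pi/2).
    latitudes = (rows + 0.5 - height / 2.0) * np.pi / height
    return np.cos(latitudes)

def spherically_weighted_mse(pred, target):
    """MSE with cos-latitude row weighting over an ERP frame.

    `pred` and `target` are (H, W) or (H, W, C) float arrays.
    This is an illustrative stand-in, not the S3PO loss itself.
    """
    h = pred.shape[0]
    # Broadcast the per-row weights across width (and channels).
    w = erp_row_weights(h).reshape(h, *([1] * (pred.ndim - 1)))
    sq_err = (pred - target) ** 2
    return float((w * sq_err).sum() / (w * np.ones_like(sq_err)).sum())
```

In such a weighting, pixel errors near the equator dominate the objective while polar rows, which are heavily stretched by the projection, contribute proportionally less.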



Submitting Author's Institution

Deakin University

Submitting Author's Country

  • Australia