Online Video Super-Resolution using Information Replenishing Unidirectional Recurrent Model
  • Arbind Agrahari Baniya ,
  • Glory Lee ,
  • Peter Eklund ,
  • Sunil Aryal ,
  • Antonio Robles-Kelly
Affiliation: Deakin University

Corresponding Author: [email protected]


Recurrent Neural Networks (RNNs) are widely used for Video Super-Resolution (VSR) because of their proven ability to learn spatio-temporal inter-dependencies across the temporal dimension. Despite an RNN’s ability to propagate memory across long sequences of frames, vanishing gradients and error accumulation remain major obstacles for unidirectional RNNs in VSR. Several bi-directional recurrent models have been proposed in the literature to alleviate this issue; however, these models are only applicable to offline use cases because of their heavy computational demands and the number of frames required per input. This paper proposes a novel unidirectional recurrent model for VSR, named “Replenished Recurrency with Dual-Duct” (R2D2), that can be used in online application settings. R2D2 combines a recurrent architecture with sliding-window-based local alignment, resulting in a hybrid recurrent architecture. It also uses a dual-duct residual network for concurrent and mutual refinement of local features alongside global memory, fully utilising the information available at each timestamp. With novel modelling and careful optimisation, R2D2 demonstrates competitive performance and efficiency despite having less information available at each timestamp than its offline (bi-directional) counterparts. Ablation analysis confirms the additive benefits of the proposed sub-components of R2D2 over baseline RNN models. The PyTorch-based code for the R2D2 model will be released at R2D2 GitRepo.
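The contrast the abstract draws between offline bi-directional models and online unidirectional processing can be illustrated with a toy sketch. This is not the R2D2 architecture: the function names and the numeric blending below are hypothetical stand-ins for learned alignment and recurrent fusion, and each frame is reduced to a single number for clarity. The point is the data-access pattern: at each timestamp the model sees only past frames plus a small sliding window, never the full future sequence.

```python
from collections import deque

def online_unidirectional_vsr(frames, window=2):
    """Toy sketch of online unidirectional recurrent processing.

    Only past information and a small sliding window of recent
    frames are available at each step, unlike offline bi-directional
    models that require the entire sequence up front.
    """
    hidden = 0.0                    # stand-in for global recurrent memory
    local = deque(maxlen=window)    # stand-in for the sliding alignment window
    outputs = []
    for frame in frames:
        local.append(frame)
        # Stand-in for sliding-window local alignment of neighbouring frames.
        aligned = sum(local) / len(local)
        # Stand-in for mutual refinement of local features with global memory.
        hidden = 0.5 * hidden + 0.5 * aligned
        outputs.append(hidden)      # stand-in for the super-resolved frame
    return outputs
```

Because each output depends only on already-seen frames, the loop can run as frames arrive, which is what makes the unidirectional design suitable for the online setting the paper targets.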
Published in Neurocomputing, volume 546, article 126355, Aug 2023. DOI: 10.1016/j.neucom.2023.126355