Bokeh Effect Rendering with Vision Transformers
preprint
posted on 2022-01-07, 22:15, authored by Hariharan Nagasubramaniam, Rabih Younes

The bokeh effect is becoming an important feature in photography: an object of interest is kept in focus while the rest of the background is blurred.
While rendering this effect naturally requires a DSLR with a large aperture, recent advances in deep learning make it possible to produce the effect on mobile cameras as well. Most existing methods use convolutional neural networks, and some rely on a depth map to render the effect.
In this paper, we propose an end-to-end Vision Transformer model for bokeh rendering of images from a monocular camera. The architecture uses a vision transformer as its backbone, learning from the entire image rather than only the local regions covered by the filters of a CNN. This ability to retain global information, combined with first pretraining the model for image restoration and then training it to render the background blur, allows our method to produce clearer images and outperform current state-of-the-art models on the EBB! dataset. The code for our proposed method can be found at: https://github.com/Soester10/Bokeh-Rendering-with-Vision-Transformers.
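The key property the abstract relies on is that transformer self-attention lets every image patch aggregate information from every other patch, unlike a CNN's local receptive fields. The following is a minimal NumPy sketch of that idea (patch embedding plus one self-attention head), not the authors' implementation; all function names and dimensions here are illustrative assumptions.

```python
import numpy as np

def patchify(img, p):
    """Split an HxWxC image into non-overlapping pxp patches, flattened
    into vectors, as in a vision transformer's patch embedding."""
    H, W, C = img.shape
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)  # (num_patches, p*p*C)

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention: every patch attends to all patches,
    so each output mixes global image context."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32, 3))      # toy 32x32 RGB image
x = patchify(img, 8)                        # 16 patches, each of dim 192
d = x.shape[1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.01 for _ in range(3))
out, attn = self_attention(x, Wq, Wk, Wv)   # out: (16, 192), attn: (16, 16)
```

The attention matrix is dense: each of the 16 patches assigns nonzero weight to all 16 patches, which is the "learning from the entire image" behavior the abstract contrasts with a CNN's filter-limited view.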
History
Email Address of Submitting Author: hnnhariharan12@gmail.com
ORCID of Submitting Author: 0000-0001-5919-6296
Submitting Author's Institution: SRM Institute of Science and Technology
Submitting Author's Country: India