Toward an objective benchmark for video completion
Video-completion methods aim to fill selected regions of a video sequence in a natural-looking manner with little to no additional user interaction. Numerous algorithms have been proposed to solve this problem, but a unified benchmark to quantify progress in the field is still lacking. Video-completion results are usually judged by their plausibility and are not expected to adhere to a single ground-truth result, which complicates measurement of video-completion performance. In this paper, we address this problem by proposing a set of full-reference quality metrics that outperform naïve approaches, together with an online benchmark for video-completion algorithms. We construct seven test sequences with ground-truth video-completion results by composing various foreground objects over a set of background videos. Using this dataset, we conduct an extensive comparative study of video-completion perceptual quality involving six algorithms and over 300 human participants. Finally, we show that by relaxing the requirement of complete adherence to ground truth and by taking temporal consistency into account, we can increase the correlation of objective quality metrics with perceptual completion quality on the proposed dataset.
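To make the distinction concrete, the sketch below contrasts a naïve full-reference error (pixelwise MSE inside the completed region) with a temporal-consistency variant that compares frame-to-frame differences instead of raw pixel values, so a plausible result that drifts from the ground truth but stays temporally stable is penalized less. This is an illustrative sketch of the general idea, not the paper's actual metrics; the function names and the specific formulation are assumptions for demonstration.

```python
import numpy as np

def naive_mse(result, reference, mask):
    """Naive full-reference error: mean squared error restricted to
    the completed (masked) region. Arrays have shape (T, H, W);
    mask is boolean. Illustrative only, not the paper's metric."""
    return ((result - reference) ** 2)[mask].mean()

def temporal_consistency_error(result, reference, mask):
    """Compare temporal gradients rather than raw pixels: a completion
    that deviates from ground truth by a stable offset scores well,
    reflecting the relaxed adherence-to-ground-truth requirement."""
    res_dt = np.diff(result, axis=0)      # frame-to-frame change, result
    ref_dt = np.diff(reference, axis=0)   # frame-to-frame change, reference
    m = mask[1:] | mask[:-1]              # region completed in either frame
    return ((res_dt - ref_dt) ** 2)[m].mean()
```

For example, a completion that shifts the filled region by a constant intensity offset has nonzero `naive_mse` but near-zero `temporal_consistency_error`, matching the intuition that viewers judge plausibility and temporal stability rather than exact adherence to one ground truth.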
Publisher URL: https://link.springer.com/article/10.1007/s11760-018-1387-5