High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis (bibtex)
by Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, Hao Li
Abstract:
Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context-aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieve state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.
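The joint optimization the abstract describes combines a content term (agreement with a coarse prediction for the hole) and a texture term (each patch inside the hole should match some patch from the known context). A minimal sketch of that objective, with hypothetical names and simplifying assumptions: the paper matches patches in mid-layer VGG feature space and optimizes across scales, while this toy version uses raw pixel patches and a fixed coarse prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, patch=3):
    """All overlapping patch x patch windows, flattened to rows."""
    h, w = img.shape
    return np.array([img[i:i + patch, j:j + patch].ravel()
                     for i in range(h - patch + 1)
                     for j in range(w - patch + 1)])

def content_loss(x_hole, x_coarse):
    # Agreement with a coarse content prediction for the hole region
    # (in the paper, the output of a trained content network).
    return float(np.mean((x_hole - x_coarse) ** 2))

def texture_loss(x_hole, context_patches, patch=3):
    # Each patch inside the hole is matched to its nearest neighbor among
    # patches from the known context; the paper does this on mid-layer
    # features of a classification network, here on raw pixels.
    hole_patches = extract_patches(x_hole, patch)
    loss = sum(np.min(np.sum((context_patches - p) ** 2, axis=1))
               for p in hole_patches)
    return float(loss / len(hole_patches))

def joint_loss(x_hole, x_coarse, context_patches, alpha=0.1):
    # Joint objective: content term plus weighted texture term.
    return content_loss(x_hole, x_coarse) + alpha * texture_loss(x_hole, context_patches)

# Toy example: a striped "known" texture surrounding a 6x6 hole.
context = np.tile(np.array([0.0, 1.0]), (8, 4))   # known region
context_patches = extract_patches(context)
coarse = context[:6, :6]                          # coarse content prediction
good_fill = coarse.copy()                         # consistent with the context
bad_fill = rng.random((6, 6))                     # random noise fill

print(joint_loss(good_fill, coarse, context_patches) <
      joint_loss(bad_fill, coarse, context_patches))  # → True
```

A fill consistent with both the coarse prediction and the surrounding texture scores lower than a noise fill, which is the behavior the joint objective is designed to reward.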
Reference:
High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis (Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, Hao Li), In arXiv preprint arXiv:1611.09969v2, 2017.
Bibtex Entry:
@article{yang_high-resolution_2017,
	title = {High-{Resolution} {Image} {Inpainting} using {Multi}-{Scale} {Neural} {Patch} {Synthesis}},
	url = {https://arxiv.org/abs/1611.09969},
	abstract = {Recent advances in deep learning have shown exciting promise in filling large holes in natural images with semantically plausible and context-aware details, impacting fundamental image manipulation tasks such as object removal. While these learning-based methods are significantly more effective in capturing high-level features than prior techniques, they can only handle very low-resolution inputs due to memory limitations and difficulty in training. Even for slightly larger images, the inpainted regions appear blurry and unpleasant boundaries become visible. We propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. We evaluate our method on the ImageNet and Paris Streetview datasets and achieve state-of-the-art inpainting accuracy. We show our approach produces sharper and more coherent results than prior methods, especially for high-resolution images.},
	journal = {arXiv preprint arXiv:1611.09969v2},
	author = {Yang, Chao and Lu, Xin and Lin, Zhe and Shechtman, Eli and Wang, Oliver and Li, Hao},
	month = apr,
	year = {2017},
	keywords = {Graphics, UARC}
}