- 4.3.1 Adaptive Blend Module
In image editing, a blend layer is often mixed with an image (the base layer) under different blend modes to accomplish a wide variety of editing tasks, such as contrast enhancement and dodge-and-burn operations. In general, given an image I and a blend layer B, we can blend the two layers to obtain the editing result R:

$$R = f(I, B) \tag{2}$$
where f is a fixed pixel-wise mapping function, usually determined by the blend mode. Constrained by its fixed transformation capacity, a specific blend mode with a fixed function f is difficult to apply directly to such a diverse range of editing tasks. To better adapt to the data distribution and the transformation patterns of different tasks, we draw on the soft light mode commonly used in image editing and design an Adaptive Blend Module (ABM):

$$R = \lambda\,(2B - E) \odot (I - I \odot I) + \mu I \tag{3}$$
where $\odot$ denotes the Hadamard product, $\lambda$ and $\mu$ are learnable parameters shared by all ABM modules in the network as well as the R-ABM module introduced next, and $E$ denotes the constant matrix whose entries are all 1.
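To make this concrete, below is a minimal PyTorch sketch of the ABM as formulated in Eq. 3; the class and variable names are my own, not from the official ABPN implementation. Note that with $\lambda = \mu = 1$, Eq. 3 reduces to the lower branch of the classic soft light formula, $f(a, b) = 2ab + a^2(1 - 2b)$.

```python
import torch
import torch.nn as nn

class AdaptiveBlendModule(nn.Module):
    """Sketch of the ABM: R = lam * (2B - E) ⊙ (I - I⊙I) + mu * I (Eq. 3)."""

    def __init__(self, lam: nn.Parameter, mu: nn.Parameter):
        super().__init__()
        # The two scalars are passed in so that a single shared pair can be
        # registered by every ABM and R-ABM in the network, as described above.
        self.lam = lam
        self.mu = mu

    def forward(self, I: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
        # The all-ones matrix E becomes the scalar 1 in element-wise arithmetic.
        return self.lam * (2.0 * B - 1.0) * (I - I * I) + self.mu * I

# Shared parameters, initialized so the module starts as plain soft light.
lam = nn.Parameter(torch.tensor(1.0))
mu = nn.Parameter(torch.tensor(1.0))
abm = AdaptiveBlendModule(lam, mu)
```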
- 4.3.2 Reverse Adaptive Blend Module
In practice, the ABM rests on the premise that the blend layer B is already available. However, the LRL only yields a low-resolution result. To obtain the blend layer B, we solve Eq. 3 for B, which gives the Reverse Adaptive Blend Module (R-ABM):

$$B = \frac{R - \mu I}{2\lambda\,(I - I \odot I)} + \frac{E}{2} \tag{4}$$

where the division is taken element-wise.
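Continuing the sketch above under the same naming assumptions, the R-ABM can be written as follows; the small `eps` guarding the denominator near I = 0 or I = 1 is my own numerical-stability precaution, not part of the formulation here.

```python
import torch
import torch.nn as nn

class ReverseAdaptiveBlendModule(nn.Module):
    """Sketch of the R-ABM: B = (R - mu*I) / (2*lam*(I - I⊙I)) + E/2 (Eq. 4)."""

    def __init__(self, lam: nn.Parameter, mu: nn.Parameter, eps: float = 1e-6):
        super().__init__()
        self.lam, self.mu = lam, mu  # the same shared scalars as the ABMs
        self.eps = eps               # guard against division by (almost) zero

    def forward(self, I: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
        # Element-wise inversion of Eq. 3 with respect to B.
        denom = 2.0 * self.lam * (I - I * I)
        return (R - self.mu * I) / (denom + self.eps) + 0.5
```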
In summary, by using the blend layer as an intermediary, the ABM and R-ABM modules realize an adaptive conversion between the image I and the result R. Compared with scaling the low-resolution result up directly through convolutional upsampling and similar operations (as in Pix2PixHD), accomplishing this through a blend layer offers two advantages: 1) in local retouching tasks, the blend layer mainly records the local transformation information between the two images, which means it carries less irrelevant information and is easier for a lightweight network to optimize; 2) the blend layer acts directly on the original image to produce the final retouching, so it can make full use of the information in the image itself and achieve a high degree of detail fidelity.
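To illustrate how the two modules cooperate across resolutions, here is a hypothetical end-to-end step; the helper name is mine, and the refining module (Section 4.3.3), which would further adjust the upsampled blend layer, is deliberately omitted to keep the sketch short.

```python
import torch.nn.functional as F

def blend_pyramid_step(I_full, I_lr, R_lr, r_abm, abm):
    """Hypothetical helper: lift a low-resolution result to full resolution."""
    # 1) Recover the blend layer from the low-resolution result (R-ABM, Eq. 4).
    B_lr = r_abm(I_lr, R_lr)
    # 2) Upsample the blend layer, which mostly encodes sparse local edits,
    #    rather than upsampling the RGB result itself.
    B_full = F.interpolate(B_lr, size=I_full.shape[-2:],
                           mode='bilinear', align_corners=False)
    # (A refining module would further adjust B_full here; omitted.)
    # 3) Blend with the original full-resolution image (ABM, Eq. 3), so
    #    untouched regions keep the original pixels and fine details.
    return abm(I_full, B_full)
```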
In fact, there are many alternative functions and strategies for the adaptive blend module; the paper discusses the design motivation and comparisons with other schemes in detail, so we do not elaborate further here. Figure 7 shows an ablation comparison between our method and other blending approaches.
- 4.3.3 Refining Module
- 4.4 Loss Functions
Experimental Results
5.1 Comparison with SOTA Methods
5.2 Ablation Study
5.3 Runtime and Memory Consumption
Results Showcase
Skin retouching results:
Original image from Unsplash [31]
Original image from the FFHQ face dataset [32]
Original image from the FFHQ face dataset [32]
As can be seen, compared with traditional beautification algorithms, our local retouching framework removes skin blemishes while fully preserving the texture and feel of the skin, achieving fine-grained, intelligent skin-quality optimization. Going a step further, we extended the method to garment de-wrinkling and achieved good results there as well, as shown below:
References:
[1] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[2] Qifeng Chen and Vladlen Koltun. Photographic image synthesis with cascaded refinement networks. In ICCV, 2017.
[3] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[4] Ji Lin, Richard Zhang, Frieder Ganz, Song Han, and Jun-Yan Zhu. Anycost gans for interactive image synthesis and editing. In CVPR, 2021.
[5] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In CVPR, 2019.
[6] Kyungjune Baek, Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Hyunjung Shim. Rethinking the truly unsupervised image-to-image translation. In ICCV, 2021.
[7] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
[8] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In CVPR, 2020.
[9] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
[10] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[11] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[12] Artsiom Sanakoyeu, Dmytro Kotovenko, Sabine Lang, and Björn Ommer. A style-aware content loss for real-time hd style transfer. In ECCV, 2018.
[13] Jianrui Cai, Shuhang Gu, and Lei Zhang. Learning a deep single image contrast enhancer from multi-exposure images. TIP, 2018.
[14] Yubin Deng, Chen Change Loy, and Xiaoou Tang. Aesthetic-driven image enhancement by adversarial learning. In ACM MM, 2018.
[15] Michaël Gharbi, Jiawen Chen, Jonathan T Barron, Samuel W Hasinoff, and Frédo Durand. Deep bilateral learning for real-time image enhancement. TOG, 2017.
[16] Jingwen He, Yihao Liu, Yu Qiao, and Chao Dong. Conditional sequential modulation for efficient global image retouching. In ECCV, 2020.
[17] Xiefan Guo, Hongyu Yang, and Di Huang. Image inpainting via conditional texture and structure dual generation. In ICCV, 2021.
[18] Jingyuan Li, Ning Wang, Lefei Zhang, Bo Du, and Dacheng Tao. Recurrent feature reasoning for image inpainting. In CVPR, 2020.
[19] Liang Liao, Jing Xiao, Zheng Wang, Chia-Wen Lin, and Shin'ichi Satoh. Image inpainting guided by coherence priors of semantics and textures. In CVPR, 2021.
[20] Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In ECCV, 2018.
[21] Nian Cai, Zhenghang Su, Zhineng Lin, Han Wang, Zhijing Yang, and Bingo Wing-Kuen Ling. Blind inpainting using the fully convolutional neural network. The Visual Computer, 2017.
[22] Yang Liu, Jinshan Pan, and Zhixun Su. Deep blind image inpainting. In IScIDE, 2019.
[23] Yi Wang, Ying-Cong Chen, Xin Tao, and Jiaya Jia. Vcnet: A robust approach to blind image inpainting. In ECCV, 2020.
[24] Jie Liang, Hui Zeng, and Lei Zhang. High-resolution photorealistic image translation in real-time: A laplacian pyramid translation network. In CVPR, 2021.
[25] Tamar Rott Shaham, Michaël Gharbi, Richard Zhang, Eli Shechtman, and Tomer Michaeli. Spatially-adaptive pixelwise networks for fast image translation. In CVPR, 2021.
[26] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In CVPR, 2018.
[27] ABPN: Adaptive blend pyramid network for real-time local retouching of ultra high-resolution photo. In CVPR, 2022.
[28] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.
[29] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[30] Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 3DV, 2016.
[31] https://unsplash.com/
[32] https://github.com/NVlabs/ffhq-dataset