Rethinking multi-exposure image fusion with extreme and diverse exposure levels: A robust framework based on Fourier transform and contrastive learning
Linhao Qu*, Shaolei Liu*, Manning Wang†, Zhijian Song†
Information Fusion (IF=17.564)
Multi-exposure image fusion (MEF) is an important technique for generating high dynamic range images. However, most existing MEF studies focus on fusing a moderately over-exposed image with a moderately under-exposed image, and they are not robust when fusing images with extreme and diverse exposure levels. In this paper, we propose a robust MEF framework based on Fourier transform and contrastive learning. Specifically, we develop a Fourier transform-based pixel intensity transfer strategy to synthesize images with diverse exposure levels from normally exposed natural images, and we train an encoder-decoder network to reconstruct the original natural image. In this way, the encoder and decoder learn to extract features from images with diverse exposure levels and to generate fused images with normal exposure. We further propose a contrastive regularization loss to enhance the network's ability to recover normal exposure levels. In addition, we construct an extreme-exposure MEF benchmark dataset and a random-exposure MEF benchmark dataset for a more comprehensive evaluation of MEF algorithms. We extensively compare our method with fifteen competitive traditional and deep learning-based MEF algorithms on three benchmark datasets, and our method outperforms the others in both subjective visual quality and objective evaluation metrics. Our code, datasets, and all fused images will be released.
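To make the pixel intensity transfer strategy concrete, below is a minimal sketch of one plausible Fourier-domain formulation. It assumes (as the abstract does not fully specify) that exposure-level differences are expressed mainly in the amplitude spectrum, so a new exposure level for a source image can be synthesized by blending its amplitude spectrum toward that of a differently exposed reference while keeping the source's phase spectrum, which preserves scene structure. The function name, the `alpha` blend parameter, and the grayscale/2-D restriction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fourier_intensity_transfer(src, ref, alpha=1.0):
    """Synthesize a new exposure level for `src` (hypothetical sketch).

    src, ref : 2-D float arrays (grayscale images) of the same shape.
    alpha    : blend factor; 0.0 leaves `src` unchanged, 1.0 fully
               adopts the reference amplitude spectrum.

    Structure (phase) comes from `src`; intensity statistics
    (amplitude) are interpolated toward `ref`.
    """
    F_src = np.fft.fft2(src)
    F_ref = np.fft.fft2(ref)

    amp_src = np.abs(F_src)      # amplitude spectrum of source
    phase_src = np.angle(F_src)  # phase spectrum of source (kept)
    amp_ref = np.abs(F_ref)      # amplitude spectrum of reference

    # Linear blend of amplitude spectra controls the exposure shift.
    amp_mix = (1.0 - alpha) * amp_src + alpha * amp_ref

    # Recombine blended amplitude with the original phase and invert.
    F_mix = amp_mix * np.exp(1j * phase_src)
    return np.fft.ifft2(F_mix).real
```

Sweeping `alpha` over a range would yield a family of synthetic exposure levels from a single normally exposed image, which is the kind of training data the abstract's self-supervised reconstruction setup requires.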