Pan-sharpening by residual network with dense convolution for remote sensing images
National Remote Sensing Bulletin, Vol. 25, Issue 6, Pages: 1270-1283 (2021)
Published: 07 June 2021
DOI: 10.11834/jrs.20219411
Chen M M,Guo Q,Liu M L and Li A. 2021. Pan-sharpening by residual network with dense convolution for remote sensing images. National Remote Sensing Bulletin, 25(6):1270-1283
Traditional remote sensing image fusion methods often introduce spectral distortion, and most deep-learning-based fusion methods fail to make full use of the information in each convolutional layer. To address these problems, this paper combines the characteristics of densely connected convolutional networks and residual networks and proposes a new fusion network. The network builds multiple dense convolutional blocks to exploit the hierarchical features of the convolutional layers, while transition layers between blocks accelerate the information flow, so that features are reused to the greatest extent and rich features are extracted. Residual learning is applied to fit the residual between deep and shallow features, which accelerates network convergence. Experiments on multispectral (MS) and panchromatic (PAN) images from GaoFen-1 (GF-1) and WorldView-2/3 (WV-2/3), with a spatial resolution ratio of 4 between MS and PAN, are used to evaluate the effectiveness of the proposed method. In terms of both visual quality and quantitative assessment, the fusion results of the proposed method outperform the compared traditional and deep learning methods, and the network is robust, generalizing to images from other satellites without pre-training. By reusing features, the proposed method achieves high spectral fidelity and improves the rendering of spatial detail, which benefits applied research on remote sensing images.
Pan-sharpening (also known as remote sensing image fusion) aims to generate multispectral (MS) images with both high spatial and high spectral resolution by fusing high-spatial-resolution panchromatic (PAN) images with high-spectral-, low-spatial-resolution MS images. Traditional pan-sharpening methods mainly include component substitution, multiresolution analysis, and model-based optimization. These methods rely on linear models, which makes it difficult to achieve an appropriate trade-off between spatial improvement and spectral preservation; moreover, they often introduce spectral or spatial distortion. Recently, many fusion methods based on deep learning have been proposed, but their networks are relatively shallow, and detailed information is inevitably lost during feature transfer. Hence, we propose a deep residual network with dense convolution for pan-sharpening.
As the network becomes deeper, features of different levels become complementary to one another. However, most deep-learning-based fusion methods fail to make full use of the information in each convolutional layer. In a densely connected convolutional network, each layer within a dense block takes the features of all preceding layers as input. To fully utilize the features learned by all convolutional layers, we build multiple dense convolutional blocks to reuse features, and a transition layer between every two blocks accelerates the information flow. Together, these mechanisms maximize feature reuse and extract rich features. Given the strong correlation between deep and shallow features, residual learning is used to supervise the dense convolutional structure so that it learns the difference between them, that is, the residual features. Residual learning then combines the shallow features with the residual features to obtain higher-level information from the MS and PAN images, preparing the network to produce fused images with high spatial and spectral resolution.
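The abstract does not give the exact layer configuration, so the following is a minimal PyTorch sketch of the two building blocks described above: a dense convolutional block in which every layer receives the concatenated outputs of all preceding layers (as in DenseNet, Huang et al., 2017), and a 1×1 transition layer between blocks. The growth rate, number of layers, and kernel sizes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense convolutional block: each layer takes the concatenation
    of the block input and all preceding layer outputs as its input."""
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate all previous feature maps along the channel axis
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class TransitionLayer(nn.Module):
    """1x1 convolution between dense blocks: compresses channels and
    speeds up information flow from block to block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)
```

With these assumed settings, a block with four 3×3 layers and growth rate 32 turns a 64-channel input into 192 channels, which the transition layer then compresses back to 64.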
To evaluate the effectiveness of the proposed method, we conduct simulated and real-image experiments on 4-band GaoFen-1 data and 8-band WorldView-2 data covering multiple land-cover types. The trained network also generalizes well to WorldView-3 images without pre-training. Visual and quantitative assessments show that the high-resolution fused images obtained by the proposed method are superior to the results of the compared traditional and deep learning methods. By reusing features, the proposed approach achieves high spectral fidelity and enhances spatial details.
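The abstract states that MS and PAN differ in spatial resolution by a factor of 4 and that both simulated and real-image experiments were run. Simulated experiments of this kind are conventionally built with Wald's protocol: degrade both inputs by the resolution ratio so the original MS can serve as the reference for quantitative scores. A minimal sketch, using plain bicubic resampling as a stand-in for the sensor-matched (e.g., MTF) filters a careful implementation would use; the paper's actual degradation procedure is not specified here:

```python
import torch
import torch.nn.functional as F

def make_simulated_pair(ms, pan, ratio=4):
    """Wald's protocol (hypothetical helper): degrade MS and PAN by the
    resolution ratio, then upsample the degraded MS to match the degraded
    PAN, so the original MS is the full-resolution reference."""
    ms_lr = F.interpolate(ms, scale_factor=1 / ratio,
                          mode='bicubic', align_corners=False)
    pan_lr = F.interpolate(pan, scale_factor=1 / ratio,
                           mode='bicubic', align_corners=False)
    ms_up = F.interpolate(ms_lr, scale_factor=ratio,
                          mode='bicubic', align_corners=False)
    return ms_up, pan_lr, ms  # network MS input, PAN input, reference
```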
The proposed method makes comprehensive use of the advantages of dense convolutional blocks and residual learning. In the feature extraction stage, features of different levels are concatenated through the dense convolutional blocks; this keeps the transmission of features and gradients effective, alleviates the vanishing-gradient problem, and provides rich spatial and spectral features for the fusion result. In the feature fusion stage, residual learning learns the difference between deep and shallow features, that is, the residual features, which accelerates the convergence of the network. Experimental results show that the network has good fusion and generalization abilities.
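Putting the two stages together, here is a hedged sketch of how they might be wired: shallow features are extracted from the stacked (upsampled MS, PAN) input, dense blocks and transition layers produce the residual features, and a global skip connection implements the residual learning described above. `PanSharpenNet` and all channel counts are hypothetical choices for illustration; it reuses the `DenseBlock` and `TransitionLayer` sketched earlier.

```python
class PanSharpenNet(nn.Module):
    """Hypothetical end-to-end sketch of the fusion network."""
    def __init__(self, ms_bands=4, base_channels=64,
                 growth_rate=32, num_layers=4):
        super().__init__()
        dense_out = base_channels + num_layers * growth_rate
        self.shallow = nn.Conv2d(ms_bands + 1, base_channels, 3, padding=1)
        self.block1 = DenseBlock(base_channels, growth_rate, num_layers)
        self.trans1 = TransitionLayer(dense_out, base_channels)
        self.block2 = DenseBlock(base_channels, growth_rate, num_layers)
        self.trans2 = TransitionLayer(dense_out, base_channels)
        self.reconstruct = nn.Conv2d(base_channels, ms_bands, 3, padding=1)

    def forward(self, ms_up, pan):
        # ms_up: MS upsampled to PAN size (B, C, H, W); pan: (B, 1, H, W)
        x = torch.cat([ms_up, pan], dim=1)
        shallow = self.shallow(x)
        residual = self.trans2(self.block2(self.trans1(self.block1(shallow))))
        # Global residual learning: the dense path only has to model the
        # difference between deep and shallow features
        return self.reconstruct(shallow + residual)
```

A quick shape check of the sketch:

```python
ms_up = torch.randn(1, 4, 64, 64)   # 4-band MS patch, upsampled to PAN size
pan = torch.randn(1, 1, 64, 64)     # PAN patch
fused = PanSharpenNet()(ms_up, pan) # -> (1, 4, 64, 64)
```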
Keywords: pan-sharpening, remote sensing image fusion, deep learning, densely connected convolutional network, dense convolutional blocks, residual learning