SAR ship detection through generative knowledge transfer
Vol. 28, Issue 2, Pages 470-480 (2024)
Published: 07 February 2024
DOI: 10.11834/jrs.20211354
娄欣,王晗,卢昊,张文驰.2024.生成式知识迁移的SAR舰船检测.遥感学报,28(2): 470-480
Lou X,Wang H,Lu H and Zhang W C. 2024. SAR ship detection through generative knowledge transfer. National Remote Sensing Bulletin, 28(2):470-480
To address the problems of data acquisition and data annotation in training deep-convolutional-neural-network-based SAR ship-detection networks, this paper proposes a SAR ship-detection framework based on generative knowledge transfer, composed of a generative knowledge transfer network and a ship-detection network. The knowledge transfer network generates annotated simulated images whose spatial distribution is consistent with that of labeled optical remote-sensing images while carrying SAR image characteristics; these generated annotated images are then used to further optimize the ship-detection network and improve the generalization of deep-convolutional-neural-network-based ship detection. Experimental results on two public datasets, SAR-Ship-Detection-Datasets (SSDD) and AIR-SARShip-1.0, show that the framework effectively improves ship detection when only a small number of annotated SAR image samples are available, and significantly reduces the probability of missed and false detections of ships in images with complex backgrounds.
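The cycle-consistency and identity terms used by the knowledge transfer network can be sketched as follows. This is a minimal illustration with NumPy arrays standing in for image batches and plain callables standing in for the two generators; the loss weights `lam_cyc` and `lam_idt` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error, the usual reconstruction penalty in CycleGAN-style models."""
    return float(np.mean(np.abs(a - b)))

def cycle_and_identity_losses(G_os, G_so, x_opt, x_sar, lam_cyc=10.0, lam_idt=5.0):
    """Sketch of the translation losses described in the abstract.

    G_os: optical -> pseudo-SAR generator; G_so: SAR -> pseudo-optical generator.
    Both are stand-in callables here.
    """
    # Cycle consistency: translating to the other domain and back should recover the input.
    loss_cyc = l1(G_so(G_os(x_opt)), x_opt) + l1(G_os(G_so(x_sar)), x_sar)
    # Identity: feeding a real SAR image to the optical->SAR generator should change it
    # little, pushing pseudo-SAR outputs toward intrinsic SAR statistics.
    loss_idt = l1(G_os(x_sar), x_sar)
    return lam_cyc * loss_cyc + lam_idt * loss_idt

# Toy usage with identity "generators": both terms vanish.
x_opt = np.random.rand(8, 8)
x_sar = np.random.rand(8, 8)
ident = lambda x: x
print(cycle_and_identity_losses(ident, ident, x_opt, x_sar))  # prints 0.0
```

The paper's additional feature boundary decision loss (separating real from pseudo SAR features) would be a third term on top of these two; its exact form is not specified in the abstract, so it is omitted here.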
To address the problems of data acquisition and data labeling in training deep-convolutional-neural-network-based SAR ship-detection networks, we propose a SAR ship-detection framework built on generative knowledge transfer, consisting of a knowledge transfer network for SAR image generation and a SAR ship-detection network. The knowledge transfer network has three components: a cycle-consistency GAN that synthesizes virtual features combining the spatial distribution of the optical image domain with the feature distribution of the SAR image domain; an identity loss that encourages the pseudo-SAR images generated by the knowledge transfer network to retain more of the intrinsic features of SAR images; and, to alleviate the SAR feature confusion issue, a feature boundary decision loss that maximizes the decision boundary between real SAR features and pseudo ones. The knowledge transfer network therefore generates pseudo-SAR images that are consistent with the spatial distribution of labeled optical remote-sensing images while having a feature distribution similar to that of real SAR images. The proposed method is evaluated from three aspects. (1) Evaluation of the generated pseudo-SAR images. When the object detection network is trained on 70% of SSDD plus the pseudo-SAR images, with the remaining 30% of SSDD as the test set, the AP reaches 97.50%. With 0%, 10%, 20%, 30%, and 50% of SSDD, the AP is 64.55%, 91.14%, 94.69%, 96.21%, and 96.84%, respectively; even with no real SAR images involved in the training process, the AP still reaches 64.55%. (2) Ablation study on the loss functions. With the cycle-consistency loss in the knowledge transfer network as the baseline, the best performance is obtained when both the identity loss and the feature boundary decision loss are applied, with the AP reaching 64.55%. (3) Evaluation of the ship-detection network. The generated pseudo-SAR images are used to train SSD, Faster R-CNN, and YOLOv3 detection networks, enabling these networks to learn parameters better suited to SAR images and thus improving their detection performance. Experiments on the above three aspects demonstrate the effectiveness of the proposed method.
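The AP figures above follow the standard detection metric of averaging interpolated precision over recall levels. The abstract does not state which interpolation scheme is used, so the sketch below assumes the 11-point variant from PASCAL VOC 2007 for illustration.

```python
def ap_11point(recalls, precisions):
    """11-point interpolated average precision (PASCAL VOC 2007 style).

    recalls must be sorted ascending; precisions are the matching values.
    The 11-point scheme is an assumption for illustration, not from the paper.
    """
    interpolated = []
    for t in [i / 10.0 for i in range(11)]:  # recall thresholds 0.0, 0.1, ..., 1.0
        # Interpolated precision: best precision at any recall >= t (0 if none reaches t).
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        interpolated.append(max(candidates, default=0.0))
    return sum(interpolated) / 11.0

# A detector with precision 1.0 at every recall level scores AP = 1.0.
print(ap_11point([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]))  # prints 1.0
```

The interpolation step is what makes AP insensitive to local dips in the precision-recall curve; only the best precision achievable beyond each recall threshold counts.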
Keywords: SAR, object detection, deep learning, image generation, transfer learning, generative adversarial networks
Beltramonte T, Braca P, Bisceglie M D, Simone A D, Galdi C, Iodice A, Millefiori L M, Riccio D and Willett P. 2020. Simulation-based feasibility analysis of ship detection using GNSS-R delay-Doppler maps. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13: 1385-1399 [DOI: 10.1109/jstars.2020.2970221]
Bochkovskiy A, Wang C Y and Liao H Y M. 2020. YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv: 2004.10934 [DOI: 10.48550/arXiv.2004.10934]
Choi Y, Uh Y, Yoo J and Ha J W. 2020. StarGAN v2: diverse image synthesis for multiple domains//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE: 8185-8194 [DOI: 10.1109/cvpr42600.2020.00821]
Demars C D, Roggemann M C and Havens T C. 2015. Multispectral detection and tracking of multiple moving targets in cluttered urban environments. Optical Engineering, 54(12): 123106 [DOI: 10.1117/1.oe.54.12.123106]
Girshick R. 2015. Fast R-CNN//2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE: 1440-1448 [DOI: 10.1109/iccv.2015.169]
Girshick R, Donahue J, Darrell T and Malik J. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation//2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE: 580-587 [DOI: 10.1109/CVPR.2014.81]
Gui Y C, Li X H and Xue L. 2019. A multilayer fusion light-head detector for SAR ship detection. Sensors, 19(5): 1124 [DOI: 10.3390/s19051124]
He K M, Zhang X Y, Ren S Q and Sun J. 2016. Deep residual learning for image recognition//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas: IEEE: 770-778 [DOI: 10.1109/cvpr.2016.90]
Isola P, Zhu J Y, Zhou T H and Efros A A. 2017. Image-to-image translation with conditional adversarial networks//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE: 5967-5976 [DOI: 10.1109/cvpr.2017.632]
Jiao J, Zhang Y, Sun H, Yang X, Gao X, Hong W, Fu K and Sun X. 2018. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection. IEEE Access, 6: 20881-20892 [DOI: 10.1109/access.2018.2825376]
Li J J, Jing M M, Lu K, Zhu L, Yang Y and Huang Z. 2019. Alleviating feature confusion for generative zero-shot learning//27th ACM International Conference on Multimedia. Nice: Association for Computing Machinery: 1587-1595 [DOI: 10.1145/3343031.3350901]
Li J W, Qu C W and Shao J Q. 2017. Ship detection in SAR images based on an improved faster R-CNN//2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA). Beijing: IEEE: 1-6 [DOI: 10.1109/bigsardata.2017.8124934]
Li K, Wan G, Cheng G, Meng L Q and Han J W. 2020. Object detection in optical remote sensing images: a survey and a new benchmark. ISPRS Journal of Photogrammetry and Remote Sensing, 159: 296-307 [DOI: 10.1016/j.isprsjprs.2019.11.023]
Li Z L, Wang L Y, Jiang S, Wu Y H and Zhang Q J. 2021. On orbit extraction method of ship target in SAR images based on ultra-lightweight network. National Remote Sensing Bulletin, 25(3): 765-775 [DOI: 10.11834/jrs.20210160]
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y and Berg A C. 2016. SSD: single shot MultiBox detector//14th European Conference on Computer Vision. Amsterdam: Springer: 21-37 [DOI: 10.1007/978-3-319-46448-0_2]
Ma W, Chen D K, Yang N and Ma C. 2018. Time-series approach to estimate the soil moisture of a subsidence area by using dual polarimetric radar data. Journal of Remote Sensing, 22(3): 521-534 [DOI: 10.11834/jrs.20187259]
Pan S J and Yang Q. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10): 1345-1359 [DOI: 10.1109/tkde.2009.191]
Redmon J, Divvala S, Girshick R and Farhadi A. 2016. You only look once: unified, real-time object detection//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE: 779-788 [DOI: 10.1109/CVPR.2016.91]
Redmon J and Farhadi A. 2017. YOLO9000: better, faster, stronger//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE: 6517-6525 [DOI: 10.1109/CVPR.2017.690]
Redmon J and Farhadi A. 2018. YOLOv3: an incremental improvement. arXiv preprint arXiv: 1804.02767 [DOI: 10.48550/arXiv.1804.02767]
Ren S Q, He K M, Girshick R and Sun J. 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6): 1137-1149 [DOI: 10.1109/tpami.2016.2577031]
Schmitt M, Hughes L H and Zhu X X. 2018. The SEN1-2 dataset for deep learning in SAR-optical data fusion. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-1: 141-146 [DOI: 10.5194/isprs-annals-IV-1-141-2018]
Sun X, Wang Z R, Sun Y R, Diao W H, Zhang Y and Fu K. 2019. AIR-SARShip-1.0: high-resolution SAR ship detection dataset. Journal of Radars, 8(6): 852-862 [DOI: 10.12000/JR19097]
Taigman Y, Polyak A and Wolf L. 2017. Unsupervised cross-domain image generation//5th International Conference on Learning Representations. Toulon: OpenReview.net
Wang C, Zhang H, Wu F, Jiang S F, Zhang B and Tang Y X. 2014. A novel hierarchical ship classifier for COSMO-SkyMed SAR data. IEEE Geoscience and Remote Sensing Letters, 11(2): 484-488 [DOI: 10.1109/lgrs.2013.2268875]
Wang Y H and Liu H W. 2015. PolSAR ship detection based on superpixel-level scattering mechanism distribution features. IEEE Geoscience and Remote Sensing Letters, 12(8): 1780-1784 [DOI: 10.1109/lgrs.2015.2425873]
Xie X Y, Xu Q Z and Hu L. 2016. Fast ship detection from optical satellite images based on ship distribution probability analysis//4th International Workshop on Earth Observation and Remote Sensing Applications (EORSA). Guangzhou: IEEE: 97-101 [DOI: 10.1109/eorsa.2016.7552774]
Xu F and Liu J H. 2016. Ship detection and extraction using visual saliency and histogram of oriented gradient. Optoelectronics Letters, 12(6): 473-477 [DOI: 10.1007/s11801-016-6179-y]
Zhang J F, Zhang P, Wang M C and Liu T. 2019. CFAR detection method of polarimetric SAR imagery based on whitening filter under G0 distribution. Journal of Remote Sensing, 23(3): 443-455 [DOI: 10.11834/jrs.20197431]
Zhang T W and Zhang X L. 2019. High-speed ship detection in SAR images based on a grid convolutional neural network. Remote Sensing, 11(10): 1206 [DOI: 10.3390/rs11101206]
Zhu J Y, Park T, Isola P and Efros A A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks//2017 IEEE International Conference on Computer Vision. Venice: IEEE: 2242-2251 [DOI: 10.1109/iccv.2017.244]