Attention-based feature pyramid networks for ship detection of optical remote sensing image
Vol. 24, Issue 2, Pages: 107-115 (2020)
Published: 07 February 2020
DOI: 10.11834/jrs.20208264
YU Ye, AI Hua, HE Xiaojun, YU Shuhai, ZHONG Xing, ZHU Ruifei. 2020. The A-FPN algorithm and its application to ship detection in remote sensing images. National Remote Sensing Bulletin, 24(2): 107-115 (in Chinese)
YU Ye, AI Hua, HE Xiaojun, YU Shuhai, ZHONG Xing, ZHU Ruifei. 2020. Attention-based feature pyramid networks for ship detection of optical remote sensing image. National Remote Sensing Bulletin, 24(2): 107-115
Ship detection in optical remote sensing images faces two main challenges. First, the background of optical remote sensing images is complex, so ship detection is easily disturbed by sea waves, clouds and fog, and land structures. Second, the image resolution is low and ship targets are small, which makes classifying and localizing them difficult. To address these problems, this paper builds on FPN and proposes A-FPN (Attention-Based Feature Pyramid Networks), a convolutional neural network model that incorporates saliency (attention) features. First, an image feature pyramid is extracted by convolution. Then, attention feature layers are built level by level from the top of the pyramid to suppress background information, and the fine-grained features of the top pyramid level are used to enhance the expressive ability of the shallow features, forming a top-down multilevel attention feature mapping structure. Finally, a Softmax classifier performs multilevel ship detection. A-FPN uses the attention mechanism to guide the fusion of features with different receptive fields, which improves the discriminative ability of the model and is of practical value for remote sensing image processing. In the experiments, the model is tested on the ship samples of the public remote sensing object detection dataset NWPU VHR-10 and reaches an accuracy of 92.8%, which shows that A-FPN is well suited to ship detection in remote sensing images.
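The core idea described above is that an attention (saliency) map derived from the coarser, semantically richer pyramid level suppresses background clutter in the finer level before the two are fused top-down. The paper's exact layer configuration is not given in this abstract, so the following is only a minimal PyTorch-style sketch of what one such attention-guided fusion step could look like; the module name, channel width, and the sigmoid-gated 1×1 convolution are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Illustrative top-down fusion step (not the paper's exact design):
    the coarser, semantically richer pyramid level produces an attention
    mask that re-weights the finer level before the two are merged."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.lateral = nn.Conv2d(channels, channels, kernel_size=1)    # lateral 1x1 conv on the finer level
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)              # 1-channel attention/saliency map
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, top: torch.Tensor, lateral_feat: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser level to the finer level's spatial size.
        top_up = F.interpolate(top, size=lateral_feat.shape[-2:], mode="nearest")
        # Attention mask built from the fine-grained top features suppresses background.
        mask = torch.sigmoid(self.attn(top_up))
        # Re-weight the shallow (lateral) features and merge with the top-down signal.
        fused = self.lateral(lateral_feat) * mask + top_up
        return self.smooth(fused)
```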
Ship detection in spaceborne optical images is a challenging task that has attracted increasing attention because of its potential applications in many fields. Although several ship detection methods have been proposed in recent years, many obstacles remain because of the large scale and high complexity of optical remote sensing images. Distinguishing ships from interferences, such as clouds, waves, and land structures whose features resemble ships, is difficult. Therefore, an accurate and stable deep-learning-based method is proposed in this work.
The method involves three steps. First, an image feature pyramid is extracted using convolution to detect multiscale ship targets. Second, a multilevel attention feature mapping structure is constructed from top to bottom, using the fine-grained features of the top pyramid layer to improve the expressive ability of the shallow features. Finally, a Softmax classifier is used for multilevel ship detection.
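Continuing the imports and the AttentionFusion sketch above, the three steps could be composed roughly as follows. The number of pyramid levels, the common channel width, the binary ship-vs.-background head, and the dense per-level softmax are placeholder assumptions rather than the authors' exact configuration.

```python
class AFPNSketch(nn.Module):
    """Hypothetical composition of the three steps: a backbone feature pyramid,
    top-down attention fusion, and a per-level softmax classification head."""

    def __init__(self, channels: int = 256, num_classes: int = 2):  # ship vs. background (assumed)
        super().__init__()
        self.fuse = nn.ModuleList([AttentionFusion(channels) for _ in range(3)])
        self.cls_head = nn.Conv2d(channels, num_classes, kernel_size=3, padding=1)

    def forward(self, pyramid: list) -> list:
        # `pyramid`: feature maps [C2, C3, C4, C5] from finest to coarsest, assumed to be
        # already projected to a common channel width by the backbone (step 1).
        feats = [pyramid[-1]]                              # start from the top (coarsest) level
        for fuse, lateral in zip(self.fuse, reversed(pyramid[:-1])):
            feats.append(fuse(feats[-1], lateral))         # step 2: top-down attention fusion
        # Step 3: per-level class scores, softmax over classes at every spatial position.
        return [F.softmax(self.cls_head(f), dim=1) for f in feats]
```

On a four-level pyramid this sketch yields one detection map per scale, which matches the multilevel ship detection described above.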
The experiments are conducted on real remote sensing images captured by the "JL-1" satellite, Google satellite imagery, and the NWPU VHR-10 dataset. The results show that our algorithm outperforms three other state-of-the-art methods. In addition, the network is pruned while accuracy is maintained, so the complexity of the algorithm is reduced and its practicality is improved, as confirmed by experiments and analysis.
This work proposes an attention-based method called A-FPN. Unlike traditional algorithms, A-FPN is more robust and has a wider range of applications. Furthermore, we effectively prune the network to reduce the complexity of the algorithm, which demonstrates its value in practical applications.
Keywords: optical remote sensing, ship detection, JL-1 satellite, neural network, attention features